00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 87 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3265 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.129 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.130 The recommended git tool is: git 00:00:00.130 using credential 00000000-0000-0000-0000-000000000002 00:00:00.133 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.150 Fetching changes from the remote Git repository 00:00:00.151 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.193 > git --version # 'git version 2.39.2' 00:00:00.193 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.204 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.204 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.731 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.741 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.751 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.751 > git config core.sparsecheckout # timeout=10 00:00:04.761 > git read-tree -mu HEAD # timeout=10 00:00:04.778 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.796 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.796 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.904 [Pipeline] Start of Pipeline 00:00:04.919 [Pipeline] library 00:00:04.921 Loading library shm_lib@master 00:00:04.921 Library shm_lib@master is cached. Copying from home. 00:00:04.941 [Pipeline] node 00:00:04.955 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:04.957 [Pipeline] { 00:00:04.969 [Pipeline] catchError 00:00:04.970 [Pipeline] { 00:00:04.983 [Pipeline] wrap 00:00:04.993 [Pipeline] { 00:00:05.003 [Pipeline] stage 00:00:05.004 [Pipeline] { (Prologue) 00:00:05.200 [Pipeline] sh 00:00:05.481 + logger -p user.info -t JENKINS-CI 00:00:05.503 [Pipeline] echo 00:00:05.505 Node: WFP21 00:00:05.514 [Pipeline] sh 00:00:05.807 [Pipeline] setCustomBuildProperty 00:00:05.818 [Pipeline] echo 00:00:05.819 Cleanup processes 00:00:05.822 [Pipeline] sh 00:00:06.102 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.102 3225585 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.113 [Pipeline] sh 00:00:06.391 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.391 ++ grep -v 'sudo pgrep' 00:00:06.391 ++ awk '{print $1}' 00:00:06.391 + sudo kill -9 00:00:06.391 + true 00:00:06.403 [Pipeline] cleanWs 00:00:06.412 [WS-CLEANUP] Deleting project workspace... 00:00:06.412 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.418 [WS-CLEANUP] done 00:00:06.422 [Pipeline] setCustomBuildProperty 00:00:06.434 [Pipeline] sh 00:00:06.711 + sudo git config --global --replace-all safe.directory '*' 00:00:06.774 [Pipeline] httpRequest 00:00:06.802 [Pipeline] echo 00:00:06.803 Sorcerer 10.211.164.101 is alive 00:00:06.810 [Pipeline] httpRequest 00:00:06.814 HttpMethod: GET 00:00:06.814 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.815 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.817 Response Code: HTTP/1.1 200 OK 00:00:06.818 Success: Status code 200 is in the accepted range: 200,404 00:00:06.818 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.746 [Pipeline] sh 00:00:08.026 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.041 [Pipeline] httpRequest 00:00:08.069 [Pipeline] echo 00:00:08.071 Sorcerer 10.211.164.101 is alive 00:00:08.079 [Pipeline] httpRequest 00:00:08.084 HttpMethod: GET 00:00:08.085 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:08.086 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:08.097 Response Code: HTTP/1.1 200 OK 00:00:08.098 Success: Status code 200 is in the accepted range: 200,404 00:00:08.098 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:21.654 [Pipeline] sh 00:01:21.938 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:24.485 [Pipeline] sh 00:01:24.767 + git -C spdk log --oneline -n5 00:01:24.767 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:24.767 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:24.767 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:24.767 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:24.767 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:24.785 [Pipeline] withCredentials 00:01:24.796 > git --version # timeout=10 00:01:24.810 > git --version # 'git version 2.39.2' 00:01:24.827 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:24.829 [Pipeline] { 00:01:24.839 [Pipeline] retry 00:01:24.841 [Pipeline] { 00:01:24.857 [Pipeline] sh 00:01:25.139 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:25.718 [Pipeline] } 00:01:25.740 [Pipeline] // retry 00:01:25.747 [Pipeline] } 00:01:25.768 [Pipeline] // withCredentials 00:01:25.778 [Pipeline] httpRequest 00:01:25.794 [Pipeline] echo 00:01:25.796 Sorcerer 10.211.164.101 is alive 00:01:25.803 [Pipeline] httpRequest 00:01:25.807 HttpMethod: GET 00:01:25.807 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.808 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.811 Response Code: HTTP/1.1 200 OK 00:01:25.812 Success: Status code 200 is in the accepted range: 200,404 00:01:25.812 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:27.267 [Pipeline] sh 00:01:27.546 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:28.938 [Pipeline] sh 00:01:29.222 + git -C dpdk log --oneline -n5 00:01:29.222 eeb0605f11 version: 23.11.0 00:01:29.222 238778122a doc: update 
release notes for 23.11 00:01:29.222 46aa6b3cfc doc: fix description of RSS features 00:01:29.222 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:29.222 7e421ae345 devtools: support skipping forbid rule check 00:01:29.234 [Pipeline] } 00:01:29.253 [Pipeline] // stage 00:01:29.262 [Pipeline] stage 00:01:29.265 [Pipeline] { (Prepare) 00:01:29.289 [Pipeline] writeFile 00:01:29.307 [Pipeline] sh 00:01:29.591 + logger -p user.info -t JENKINS-CI 00:01:29.605 [Pipeline] sh 00:01:29.889 + logger -p user.info -t JENKINS-CI 00:01:29.908 [Pipeline] sh 00:01:30.231 + cat autorun-spdk.conf 00:01:30.231 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.231 SPDK_TEST_NVMF=1 00:01:30.231 SPDK_TEST_NVME_CLI=1 00:01:30.231 SPDK_TEST_NVMF_NICS=mlx5 00:01:30.231 SPDK_RUN_UBSAN=1 00:01:30.231 NET_TYPE=phy 00:01:30.231 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:30.231 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:30.239 RUN_NIGHTLY=1 00:01:30.251 [Pipeline] readFile 00:01:30.292 [Pipeline] withEnv 00:01:30.295 [Pipeline] { 00:01:30.313 [Pipeline] sh 00:01:30.599 + set -ex 00:01:30.599 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:30.599 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:30.599 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.599 ++ SPDK_TEST_NVMF=1 00:01:30.599 ++ SPDK_TEST_NVME_CLI=1 00:01:30.599 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:30.599 ++ SPDK_RUN_UBSAN=1 00:01:30.599 ++ NET_TYPE=phy 00:01:30.599 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:30.599 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:30.599 ++ RUN_NIGHTLY=1 00:01:30.599 + case $SPDK_TEST_NVMF_NICS in 00:01:30.599 + DRIVERS=mlx5_ib 00:01:30.599 + [[ -n mlx5_ib ]] 00:01:30.599 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:30.599 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:37.179 rmmod: ERROR: Module irdma is not currently loaded 00:01:37.179 rmmod: ERROR: Module i40iw is not currently loaded 00:01:37.179 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:37.179 + true 00:01:37.179 + for D in $DRIVERS 00:01:37.179 + sudo modprobe mlx5_ib 00:01:37.179 + exit 0 00:01:37.188 [Pipeline] } 00:01:37.211 [Pipeline] // withEnv 00:01:37.217 [Pipeline] } 00:01:37.240 [Pipeline] // stage 00:01:37.249 [Pipeline] catchError 00:01:37.250 [Pipeline] { 00:01:37.262 [Pipeline] timeout 00:01:37.262 Timeout set to expire in 1 hr 0 min 00:01:37.264 [Pipeline] { 00:01:37.274 [Pipeline] stage 00:01:37.276 [Pipeline] { (Tests) 00:01:37.291 [Pipeline] sh 00:01:37.603 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:37.603 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:37.603 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:37.603 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:37.603 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:37.603 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:37.603 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:37.603 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:37.603 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:37.603 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:37.603 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:37.603 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:37.603 + source /etc/os-release 00:01:37.603 ++ NAME='Fedora Linux' 00:01:37.603 ++ VERSION='38 (Cloud Edition)' 00:01:37.603 ++ ID=fedora 00:01:37.603 ++ VERSION_ID=38 00:01:37.603 ++ VERSION_CODENAME= 00:01:37.603 ++ PLATFORM_ID=platform:f38 00:01:37.603 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:37.603 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:37.603 ++ LOGO=fedora-logo-icon 00:01:37.603 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:37.603 ++ HOME_URL=https://fedoraproject.org/ 00:01:37.603 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:37.603 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:37.603 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:37.603 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:37.603 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:37.603 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:37.603 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:37.603 ++ SUPPORT_END=2024-05-14 00:01:37.603 ++ VARIANT='Cloud Edition' 00:01:37.603 ++ VARIANT_ID=cloud 00:01:37.603 + uname -a 00:01:37.603 Linux spdk-wfp-21 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:37.603 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:40.893 Hugepages 00:01:40.893 node hugesize free / total 00:01:40.893 node0 1048576kB 0 / 0 00:01:40.893 node0 2048kB 0 / 0 00:01:40.893 node1 1048576kB 0 / 0 00:01:40.893 node1 2048kB 0 / 0 00:01:40.893 00:01:40.893 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:40.893 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:40.893 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:40.893 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:40.893 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:40.893 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:40.893 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:40.893 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:40.893 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:40.893 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:40.893 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:40.893 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:40.893 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:40.893 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:40.893 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:40.893 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:40.893 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:40.893 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:40.893 + rm -f /tmp/spdk-ld-path 00:01:40.893 + source autorun-spdk.conf 00:01:40.893 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.893 ++ SPDK_TEST_NVMF=1 00:01:40.893 ++ SPDK_TEST_NVME_CLI=1 00:01:40.893 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:40.893 ++ SPDK_RUN_UBSAN=1 00:01:40.893 ++ NET_TYPE=phy 00:01:40.893 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:40.893 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:40.893 ++ RUN_NIGHTLY=1 00:01:40.893 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:40.893 + [[ -n '' ]] 00:01:40.893 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:40.893 + for M in /var/spdk/build-*-manifest.txt 
00:01:40.893 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:40.893 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:40.893 + for M in /var/spdk/build-*-manifest.txt 00:01:40.893 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:40.893 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:40.893 ++ uname 00:01:40.893 + [[ Linux == \L\i\n\u\x ]] 00:01:40.893 + sudo dmesg -T 00:01:40.893 + sudo dmesg --clear 00:01:40.893 + dmesg_pid=3227085 00:01:40.893 + [[ Fedora Linux == FreeBSD ]] 00:01:40.893 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.893 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.893 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:40.893 + [[ -x /usr/src/fio-static/fio ]] 00:01:40.893 + export FIO_BIN=/usr/src/fio-static/fio 00:01:40.893 + FIO_BIN=/usr/src/fio-static/fio 00:01:40.893 + sudo dmesg -Tw 00:01:40.893 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:40.893 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:40.893 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:40.893 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.893 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.893 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:40.893 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.893 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.893 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:40.893 Test configuration: 00:01:40.893 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.893 SPDK_TEST_NVMF=1 00:01:40.893 SPDK_TEST_NVME_CLI=1 00:01:40.893 SPDK_TEST_NVMF_NICS=mlx5 00:01:40.893 SPDK_RUN_UBSAN=1 00:01:40.893 NET_TYPE=phy 00:01:40.893 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:40.893 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:40.893 RUN_NIGHTLY=1 20:48:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:40.893 20:48:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:40.893 20:48:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:40.893 20:48:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:40.893 20:48:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.894 20:48:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.894 20:48:31 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.894 20:48:31 -- paths/export.sh@5 -- $ export PATH 00:01:40.894 20:48:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.894 20:48:31 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:40.894 20:48:31 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:40.894 20:48:31 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720896511.XXXXXX 00:01:40.894 20:48:31 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720896511.raQyvN 00:01:40.894 20:48:31 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:40.894 20:48:31 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:01:40.894 20:48:31 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:40.894 20:48:31 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:40.894 20:48:31 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:40.894 20:48:31 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:40.894 20:48:31 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:40.894 20:48:31 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:40.894 20:48:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.894 20:48:31 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:40.894 20:48:31 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:40.894 20:48:31 -- pm/common@17 -- $ local monitor 00:01:40.894 20:48:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.894 20:48:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.894 20:48:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.894 20:48:31 -- pm/common@21 -- $ date +%s 00:01:40.894 20:48:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.894 20:48:31 -- pm/common@21 -- $ date +%s 00:01:40.894 20:48:31 -- pm/common@25 -- $ sleep 1 00:01:40.894 20:48:31 -- pm/common@21 -- $ date +%s 00:01:40.894 20:48:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 
-l -p monitor.autobuild.sh.1720896511 00:01:40.894 20:48:31 -- pm/common@21 -- $ date +%s 00:01:40.894 20:48:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720896511 00:01:40.894 20:48:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720896511 00:01:40.894 20:48:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720896511 00:01:40.894 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720896511_collect-cpu-load.pm.log 00:01:40.894 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720896511_collect-vmstat.pm.log 00:01:40.894 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720896511_collect-cpu-temp.pm.log 00:01:40.894 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720896511_collect-bmc-pm.bmc.pm.log 00:01:41.832 20:48:32 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:41.832 20:48:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:41.832 20:48:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:41.832 20:48:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:41.832 20:48:32 -- spdk/autobuild.sh@16 -- $ date -u 00:01:41.832 Sat Jul 13 06:48:32 PM UTC 2024 00:01:41.832 20:48:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:41.832 v24.05-13-g5fa2f5086 00:01:41.832 20:48:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:41.832 20:48:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:41.832 20:48:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:41.832 20:48:32 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:41.832 20:48:32 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:41.832 20:48:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.093 ************************************ 00:01:42.093 START TEST ubsan 00:01:42.093 ************************************ 00:01:42.093 20:48:32 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:42.093 using ubsan 00:01:42.093 00:01:42.093 real 0m0.001s 00:01:42.093 user 0m0.000s 00:01:42.093 sys 0m0.000s 00:01:42.093 20:48:32 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:42.093 20:48:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:42.093 ************************************ 00:01:42.093 END TEST ubsan 00:01:42.093 ************************************ 00:01:42.093 20:48:32 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:42.093 20:48:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:42.093 20:48:32 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:42.094 20:48:32 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:42.094 20:48:32 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:42.094 20:48:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.094 ************************************ 00:01:42.094 START TEST build_native_dpdk 00:01:42.094 ************************************ 
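The START TEST / END TEST banners and the real/user/sys timings above come from the run_test helper that the xtrace attributes to common/autotest_common.sh. Below is a minimal sketch of such a wrapper, reconstructed from the banner and timing output visible in this log; it is an illustration, not SPDK's actual implementation, and the function name run_test_sketch is hypothetical.

    run_test_sketch() {
        # Print a banner, time the command, and propagate its exit status.
        local name=$1; shift
        printf '************************************\nSTART TEST %s\n************************************\n' "$name"
        time "$@"                     # bash builtin; prints real/user/sys
        local rc=$?
        printf '************************************\nEND TEST %s\n************************************\n' "$name"
        return $rc
    }

    # Mirrors the invocation traced in the log:
    run_test_sketch ubsan echo 'using ubsan'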
00:01:42.094 20:48:32 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:42.094 20:48:32 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:42.094 20:48:32 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:42.095 20:48:32 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:42.096 20:48:32 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:42.096 20:48:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:42.096 20:48:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:42.096 20:48:32 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:42.096 20:48:32 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:42.096 20:48:32 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:42.097 eeb0605f11 version: 23.11.0 00:01:42.097 238778122a doc: update release notes for 23.11 00:01:42.097 46aa6b3cfc doc: fix description of RSS features 00:01:42.097 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:42.097 7e421ae345 devtools: support skipping forbid rule check 00:01:42.097 20:48:32 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:42.097 20:48:32 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:42.097 20:48:32 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:42.097 20:48:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:42.097 20:48:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:42.098 20:48:32 build_native_dpdk -- 
scripts/common.sh@342 -- $ : 1 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:42.098 20:48:32 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:42.098 patching file config/rte_config.h 00:01:42.098 Hunk #1 succeeded at 60 (offset 1 line). 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:42.098 20:48:32 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:47.374 The Meson build system 00:01:47.374 Version: 1.3.1 00:01:47.374 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:47.374 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:01:47.374 Build type: native build 00:01:47.374 Program cat found: YES (/usr/bin/cat) 00:01:47.374 Project name: DPDK 00:01:47.374 Project version: 23.11.0 00:01:47.374 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:47.374 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:47.374 Host machine cpu family: x86_64 00:01:47.374 Host machine cpu: x86_64 00:01:47.374 Message: ## Building in Developer Mode ## 00:01:47.374 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:47.374 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:47.374 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:47.374 Program python3 found: YES (/usr/bin/python3) 00:01:47.374 Program cat found: YES (/usr/bin/cat) 00:01:47.374 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
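A few entries back, the xtrace steps through cmp_versions from scripts/common.sh, comparing the checked-out DPDK version (23.11.0) against 21.11.0 before config/rte_config.h is patched: each version string is split on '.', '-' and ':' and the fields are compared numerically, left to right, with the first differing field deciding. A condensed, self-contained sketch of that comparison is below; it is simplified relative to the real helper, which supports several operators and sanitizes each field via its decimal function.

    version_lt() {   # version_lt A B  ->  exit 0 iff A < B
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a > b )) && return 1   # first differing field decides
            (( a < b )) && return 0
        done
        return 1                      # equal versions are not less-than
    }

    version_lt 23.11.0 21.11.0 || echo 'not older'   # exits 1, as in the trace above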
00:01:47.374 Compiler for C supports arguments -march=native: YES 00:01:47.374 Checking for size of "void *" : 8 00:01:47.374 Checking for size of "void *" : 8 (cached) 00:01:47.374 Library m found: YES 00:01:47.374 Library numa found: YES 00:01:47.374 Has header "numaif.h" : YES 00:01:47.374 Library fdt found: NO 00:01:47.374 Library execinfo found: NO 00:01:47.374 Has header "execinfo.h" : YES 00:01:47.374 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:47.374 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:47.374 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:47.374 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:47.374 Run-time dependency openssl found: YES 3.0.9 00:01:47.374 Run-time dependency libpcap found: YES 1.10.4 00:01:47.374 Has header "pcap.h" with dependency libpcap: YES 00:01:47.374 Compiler for C supports arguments -Wcast-qual: YES 00:01:47.374 Compiler for C supports arguments -Wdeprecated: YES 00:01:47.374 Compiler for C supports arguments -Wformat: YES 00:01:47.374 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:47.374 Compiler for C supports arguments -Wformat-security: NO 00:01:47.374 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:47.374 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:47.374 Compiler for C supports arguments -Wnested-externs: YES 00:01:47.374 Compiler for C supports arguments -Wold-style-definition: YES 00:01:47.374 Compiler for C supports arguments -Wpointer-arith: YES 00:01:47.374 Compiler for C supports arguments -Wsign-compare: YES 00:01:47.374 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:47.374 Compiler for C supports arguments -Wundef: YES 00:01:47.374 Compiler for C supports arguments -Wwrite-strings: YES 00:01:47.374 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:47.374 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:47.374 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:47.374 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:47.374 Program objdump found: YES (/usr/bin/objdump) 00:01:47.374 Compiler for C supports arguments -mavx512f: YES 00:01:47.374 Checking if "AVX512 checking" compiles: YES 00:01:47.374 Fetching value of define "__SSE4_2__" : 1 00:01:47.374 Fetching value of define "__AES__" : 1 00:01:47.374 Fetching value of define "__AVX__" : 1 00:01:47.374 Fetching value of define "__AVX2__" : 1 00:01:47.374 Fetching value of define "__AVX512BW__" : 1 00:01:47.374 Fetching value of define "__AVX512CD__" : 1 00:01:47.374 Fetching value of define "__AVX512DQ__" : 1 00:01:47.374 Fetching value of define "__AVX512F__" : 1 00:01:47.374 Fetching value of define "__AVX512VL__" : 1 00:01:47.374 Fetching value of define "__PCLMUL__" : 1 00:01:47.374 Fetching value of define "__RDRND__" : 1 00:01:47.374 Fetching value of define "__RDSEED__" : 1 00:01:47.374 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:47.374 Fetching value of define "__znver1__" : (undefined) 00:01:47.374 Fetching value of define "__znver2__" : (undefined) 00:01:47.374 Fetching value of define "__znver3__" : (undefined) 00:01:47.374 Fetching value of define "__znver4__" : (undefined) 00:01:47.374 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:47.374 Message: lib/log: Defining dependency "log" 00:01:47.374 Message: lib/kvargs: Defining dependency "kvargs" 00:01:47.374 Message: lib/telemetry: Defining dependency 
"telemetry" 00:01:47.374 Checking for function "getentropy" : NO 00:01:47.374 Message: lib/eal: Defining dependency "eal" 00:01:47.374 Message: lib/ring: Defining dependency "ring" 00:01:47.374 Message: lib/rcu: Defining dependency "rcu" 00:01:47.374 Message: lib/mempool: Defining dependency "mempool" 00:01:47.374 Message: lib/mbuf: Defining dependency "mbuf" 00:01:47.374 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:47.374 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:47.374 Compiler for C supports arguments -mpclmul: YES 00:01:47.374 Compiler for C supports arguments -maes: YES 00:01:47.374 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:47.374 Compiler for C supports arguments -mavx512bw: YES 00:01:47.374 Compiler for C supports arguments -mavx512dq: YES 00:01:47.374 Compiler for C supports arguments -mavx512vl: YES 00:01:47.374 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:47.374 Compiler for C supports arguments -mavx2: YES 00:01:47.374 Compiler for C supports arguments -mavx: YES 00:01:47.374 Message: lib/net: Defining dependency "net" 00:01:47.374 Message: lib/meter: Defining dependency "meter" 00:01:47.374 Message: lib/ethdev: Defining dependency "ethdev" 00:01:47.374 Message: lib/pci: Defining dependency "pci" 00:01:47.374 Message: lib/cmdline: Defining dependency "cmdline" 00:01:47.374 Message: lib/metrics: Defining dependency "metrics" 00:01:47.374 Message: lib/hash: Defining dependency "hash" 00:01:47.374 Message: lib/timer: Defining dependency "timer" 00:01:47.374 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.374 Message: lib/acl: Defining dependency "acl" 00:01:47.374 Message: lib/bbdev: Defining dependency "bbdev" 00:01:47.374 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:47.374 Run-time dependency libelf found: YES 0.190 00:01:47.374 Message: lib/bpf: Defining dependency "bpf" 00:01:47.374 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:47.374 Message: lib/compressdev: Defining dependency "compressdev" 00:01:47.374 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:47.374 Message: lib/distributor: Defining dependency "distributor" 00:01:47.374 Message: lib/dmadev: Defining dependency "dmadev" 00:01:47.374 Message: lib/efd: Defining dependency "efd" 00:01:47.374 Message: lib/eventdev: Defining dependency "eventdev" 00:01:47.374 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:47.374 Message: lib/gpudev: Defining dependency "gpudev" 00:01:47.374 Message: lib/gro: Defining dependency "gro" 00:01:47.374 Message: lib/gso: Defining dependency "gso" 00:01:47.374 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:47.374 Message: lib/jobstats: Defining dependency "jobstats" 00:01:47.374 Message: lib/latencystats: Defining dependency "latencystats" 00:01:47.374 Message: lib/lpm: Defining dependency "lpm" 00:01:47.374 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:01:47.374 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:47.374 Message: lib/member: Defining dependency "member" 00:01:47.374 Message: lib/pcapng: Defining dependency "pcapng" 00:01:47.374 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:47.374 Message: lib/power: Defining dependency "power" 00:01:47.374 Message: lib/rawdev: Defining dependency "rawdev" 00:01:47.374 Message: lib/regexdev: Defining dependency "regexdev" 00:01:47.374 Message: lib/mldev: Defining dependency "mldev" 00:01:47.374 Message: lib/rib: Defining dependency "rib" 00:01:47.374 Message: lib/reorder: Defining dependency "reorder" 00:01:47.374 Message: lib/sched: Defining dependency "sched" 00:01:47.374 Message: lib/security: Defining dependency "security" 00:01:47.374 Message: lib/stack: Defining dependency "stack" 00:01:47.374 Has header "linux/userfaultfd.h" : YES 00:01:47.374 Has header "linux/vduse.h" : YES 00:01:47.374 Message: lib/vhost: Defining dependency "vhost" 00:01:47.374 Message: lib/ipsec: Defining dependency "ipsec" 00:01:47.374 Message: lib/pdcp: Defining dependency "pdcp" 00:01:47.374 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:47.374 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.374 Message: lib/fib: Defining dependency "fib" 00:01:47.374 Message: lib/port: Defining dependency "port" 00:01:47.374 Message: lib/pdump: Defining dependency "pdump" 00:01:47.374 Message: lib/table: Defining dependency "table" 00:01:47.374 Message: lib/pipeline: Defining dependency "pipeline" 00:01:47.374 Message: lib/graph: Defining dependency "graph" 00:01:47.374 Message: lib/node: Defining dependency "node" 00:01:47.374 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:47.942 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:47.942 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:47.942 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:47.942 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:47.942 Compiler for C supports arguments -Wno-unused-value: YES 00:01:47.942 Compiler for C supports arguments -Wno-format: YES 00:01:47.942 Compiler for C supports arguments -Wno-format-security: YES 00:01:47.942 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:47.942 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:47.942 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:47.942 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:47.942 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.942 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.942 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:47.942 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:47.942 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:47.942 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:47.942 Has header "sys/epoll.h" : YES 00:01:47.942 Program doxygen found: YES (/usr/bin/doxygen) 00:01:47.942 Configuring doxy-api-html.conf using configuration 00:01:47.942 Configuring doxy-api-man.conf using configuration 00:01:47.942 Program mandb found: YES (/usr/bin/mandb) 00:01:47.942 Program sphinx-build found: NO 00:01:47.942 Configuring rte_build_config.h using configuration 00:01:47.942 Message: 00:01:47.942 ================= 00:01:47.942 Applications Enabled 00:01:47.942 
================= 00:01:47.942 00:01:47.942 apps: 00:01:47.942 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:47.942 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:47.942 test-pmd, test-regex, test-sad, test-security-perf, 00:01:47.942 00:01:47.942 Message: 00:01:47.942 ================= 00:01:47.942 Libraries Enabled 00:01:47.942 ================= 00:01:47.942 00:01:47.942 libs: 00:01:47.942 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:47.942 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:47.942 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:47.942 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:47.942 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:47.942 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:47.942 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:47.942 00:01:47.942 00:01:47.942 Message: 00:01:47.942 =============== 00:01:47.942 Drivers Enabled 00:01:47.942 =============== 00:01:47.942 00:01:47.942 common: 00:01:47.942 00:01:47.942 bus: 00:01:47.942 pci, vdev, 00:01:47.942 mempool: 00:01:47.942 ring, 00:01:47.942 dma: 00:01:47.942 00:01:47.942 net: 00:01:47.942 i40e, 00:01:47.942 raw: 00:01:47.942 00:01:47.942 crypto: 00:01:47.943 00:01:47.943 compress: 00:01:47.943 00:01:47.943 regex: 00:01:47.943 00:01:47.943 ml: 00:01:47.943 00:01:47.943 vdpa: 00:01:47.943 00:01:47.943 event: 00:01:47.943 00:01:47.943 baseband: 00:01:47.943 00:01:47.943 gpu: 00:01:47.943 00:01:47.943 00:01:47.943 Message: 00:01:47.943 ================= 00:01:47.943 Content Skipped 00:01:47.943 ================= 00:01:47.943 00:01:47.943 apps: 00:01:47.943 00:01:47.943 libs: 00:01:47.943 00:01:47.943 drivers: 00:01:47.943 common/cpt: not in enabled drivers build config 00:01:47.943 common/dpaax: not in enabled drivers build config 00:01:47.943 common/iavf: not in enabled drivers build config 00:01:47.943 common/idpf: not in enabled drivers build config 00:01:47.943 common/mvep: not in enabled drivers build config 00:01:47.943 common/octeontx: not in enabled drivers build config 00:01:47.943 bus/auxiliary: not in enabled drivers build config 00:01:47.943 bus/cdx: not in enabled drivers build config 00:01:47.943 bus/dpaa: not in enabled drivers build config 00:01:47.943 bus/fslmc: not in enabled drivers build config 00:01:47.943 bus/ifpga: not in enabled drivers build config 00:01:47.943 bus/platform: not in enabled drivers build config 00:01:47.943 bus/vmbus: not in enabled drivers build config 00:01:47.943 common/cnxk: not in enabled drivers build config 00:01:47.943 common/mlx5: not in enabled drivers build config 00:01:47.943 common/nfp: not in enabled drivers build config 00:01:47.943 common/qat: not in enabled drivers build config 00:01:47.943 common/sfc_efx: not in enabled drivers build config 00:01:47.943 mempool/bucket: not in enabled drivers build config 00:01:47.943 mempool/cnxk: not in enabled drivers build config 00:01:47.943 mempool/dpaa: not in enabled drivers build config 00:01:47.943 mempool/dpaa2: not in enabled drivers build config 00:01:47.943 mempool/octeontx: not in enabled drivers build config 00:01:47.943 mempool/stack: not in enabled drivers build config 00:01:47.943 dma/cnxk: not in enabled drivers build config 00:01:47.943 dma/dpaa: not in enabled drivers build config 00:01:47.943 dma/dpaa2: not in enabled drivers build 
config 00:01:47.943 dma/hisilicon: not in enabled drivers build config 00:01:47.943 dma/idxd: not in enabled drivers build config 00:01:47.943 dma/ioat: not in enabled drivers build config 00:01:47.943 dma/skeleton: not in enabled drivers build config 00:01:47.943 net/af_packet: not in enabled drivers build config 00:01:47.943 net/af_xdp: not in enabled drivers build config 00:01:47.943 net/ark: not in enabled drivers build config 00:01:47.943 net/atlantic: not in enabled drivers build config 00:01:47.943 net/avp: not in enabled drivers build config 00:01:47.943 net/axgbe: not in enabled drivers build config 00:01:47.943 net/bnx2x: not in enabled drivers build config 00:01:47.943 net/bnxt: not in enabled drivers build config 00:01:47.943 net/bonding: not in enabled drivers build config 00:01:47.943 net/cnxk: not in enabled drivers build config 00:01:47.943 net/cpfl: not in enabled drivers build config 00:01:47.943 net/cxgbe: not in enabled drivers build config 00:01:47.943 net/dpaa: not in enabled drivers build config 00:01:47.943 net/dpaa2: not in enabled drivers build config 00:01:47.943 net/e1000: not in enabled drivers build config 00:01:47.943 net/ena: not in enabled drivers build config 00:01:47.943 net/enetc: not in enabled drivers build config 00:01:47.943 net/enetfec: not in enabled drivers build config 00:01:47.943 net/enic: not in enabled drivers build config 00:01:47.943 net/failsafe: not in enabled drivers build config 00:01:47.943 net/fm10k: not in enabled drivers build config 00:01:47.943 net/gve: not in enabled drivers build config 00:01:47.943 net/hinic: not in enabled drivers build config 00:01:47.943 net/hns3: not in enabled drivers build config 00:01:47.943 net/iavf: not in enabled drivers build config 00:01:47.943 net/ice: not in enabled drivers build config 00:01:47.943 net/idpf: not in enabled drivers build config 00:01:47.943 net/igc: not in enabled drivers build config 00:01:47.943 net/ionic: not in enabled drivers build config 00:01:47.943 net/ipn3ke: not in enabled drivers build config 00:01:47.943 net/ixgbe: not in enabled drivers build config 00:01:47.943 net/mana: not in enabled drivers build config 00:01:47.943 net/memif: not in enabled drivers build config 00:01:47.943 net/mlx4: not in enabled drivers build config 00:01:47.943 net/mlx5: not in enabled drivers build config 00:01:47.943 net/mvneta: not in enabled drivers build config 00:01:47.943 net/mvpp2: not in enabled drivers build config 00:01:47.943 net/netvsc: not in enabled drivers build config 00:01:47.943 net/nfb: not in enabled drivers build config 00:01:47.943 net/nfp: not in enabled drivers build config 00:01:47.943 net/ngbe: not in enabled drivers build config 00:01:47.943 net/null: not in enabled drivers build config 00:01:47.943 net/octeontx: not in enabled drivers build config 00:01:47.943 net/octeon_ep: not in enabled drivers build config 00:01:47.943 net/pcap: not in enabled drivers build config 00:01:47.943 net/pfe: not in enabled drivers build config 00:01:47.943 net/qede: not in enabled drivers build config 00:01:47.943 net/ring: not in enabled drivers build config 00:01:47.943 net/sfc: not in enabled drivers build config 00:01:47.943 net/softnic: not in enabled drivers build config 00:01:47.943 net/tap: not in enabled drivers build config 00:01:47.943 net/thunderx: not in enabled drivers build config 00:01:47.943 net/txgbe: not in enabled drivers build config 00:01:47.943 net/vdev_netvsc: not in enabled drivers build config 00:01:47.943 net/vhost: not in enabled drivers build config 
00:01:47.943 net/virtio: not in enabled drivers build config 00:01:47.943 net/vmxnet3: not in enabled drivers build config 00:01:47.943 raw/cnxk_bphy: not in enabled drivers build config 00:01:47.943 raw/cnxk_gpio: not in enabled drivers build config 00:01:47.943 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:47.943 raw/ifpga: not in enabled drivers build config 00:01:47.943 raw/ntb: not in enabled drivers build config 00:01:47.943 raw/skeleton: not in enabled drivers build config 00:01:47.943 crypto/armv8: not in enabled drivers build config 00:01:47.943 crypto/bcmfs: not in enabled drivers build config 00:01:47.943 crypto/caam_jr: not in enabled drivers build config 00:01:47.943 crypto/ccp: not in enabled drivers build config 00:01:47.943 crypto/cnxk: not in enabled drivers build config 00:01:47.943 crypto/dpaa_sec: not in enabled drivers build config 00:01:47.943 crypto/dpaa2_sec: not in enabled drivers build config 00:01:47.943 crypto/ipsec_mb: not in enabled drivers build config 00:01:47.943 crypto/mlx5: not in enabled drivers build config 00:01:47.943 crypto/mvsam: not in enabled drivers build config 00:01:47.943 crypto/nitrox: not in enabled drivers build config 00:01:47.943 crypto/null: not in enabled drivers build config 00:01:47.943 crypto/octeontx: not in enabled drivers build config 00:01:47.943 crypto/openssl: not in enabled drivers build config 00:01:47.943 crypto/scheduler: not in enabled drivers build config 00:01:47.943 crypto/uadk: not in enabled drivers build config 00:01:47.943 crypto/virtio: not in enabled drivers build config 00:01:47.943 compress/isal: not in enabled drivers build config 00:01:47.943 compress/mlx5: not in enabled drivers build config 00:01:47.943 compress/octeontx: not in enabled drivers build config 00:01:47.943 compress/zlib: not in enabled drivers build config 00:01:47.943 regex/mlx5: not in enabled drivers build config 00:01:47.943 regex/cn9k: not in enabled drivers build config 00:01:47.943 ml/cnxk: not in enabled drivers build config 00:01:47.943 vdpa/ifc: not in enabled drivers build config 00:01:47.943 vdpa/mlx5: not in enabled drivers build config 00:01:47.943 vdpa/nfp: not in enabled drivers build config 00:01:47.943 vdpa/sfc: not in enabled drivers build config 00:01:47.943 event/cnxk: not in enabled drivers build config 00:01:47.943 event/dlb2: not in enabled drivers build config 00:01:47.943 event/dpaa: not in enabled drivers build config 00:01:47.943 event/dpaa2: not in enabled drivers build config 00:01:47.943 event/dsw: not in enabled drivers build config 00:01:47.943 event/opdl: not in enabled drivers build config 00:01:47.943 event/skeleton: not in enabled drivers build config 00:01:47.943 event/sw: not in enabled drivers build config 00:01:47.943 event/octeontx: not in enabled drivers build config 00:01:47.943 baseband/acc: not in enabled drivers build config 00:01:47.943 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:47.943 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:47.943 baseband/la12xx: not in enabled drivers build config 00:01:47.943 baseband/null: not in enabled drivers build config 00:01:47.943 baseband/turbo_sw: not in enabled drivers build config 00:01:47.943 gpu/cuda: not in enabled drivers build config 00:01:47.943 00:01:47.943 00:01:47.943 Build targets in project: 217 00:01:47.943 00:01:47.943 DPDK 23.11.0 00:01:47.943 00:01:47.943 User defined options 00:01:47.943 libdir : lib 00:01:47.943 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:47.943 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:47.943 c_link_args : 00:01:47.943 enable_docs : false 00:01:47.943 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:47.943 enable_kmods : false 00:01:47.943 machine : native 00:01:47.943 tests : false 00:01:47.943 00:01:47.943 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:47.943 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:48.213 20:48:38 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:01:48.213 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:48.213 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:48.213 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:48.480 [3/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:48.480 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.480 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:48.480 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.480 [7/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:48.480 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:48.480 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:48.480 [10/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:48.480 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:48.480 [12/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:48.480 [13/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:48.480 [14/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.480 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:48.480 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:48.480 [17/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:48.480 [18/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:48.480 [19/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.480 [20/707] Linking static target lib/librte_kvargs.a 00:01:48.480 [21/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:48.480 [22/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:48.480 [23/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.480 [24/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:48.480 [25/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:48.480 [26/707] Linking static target lib/librte_pci.a 00:01:48.480 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:48.480 [28/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.480 [29/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:48.480 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:48.480 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:48.480 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 
00:01:48.480 [33/707] Linking static target lib/librte_log.a
00:01:48.738 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:48.738 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:48.738 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:48.738 [37/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.000 [38/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:49.000 [39/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.000 [40/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:49.000 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:49.000 [42/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:49.000 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:49.000 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:49.000 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:49.000 [46/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:49.000 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:49.000 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:49.000 [49/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:49.000 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:49.000 [51/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:49.000 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:49.000 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:49.000 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:49.000 [55/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:49.000 [56/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:49.000 [57/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:49.000 [58/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:49.000 [59/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:49.000 [60/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:49.000 [61/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:49.000 [62/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:49.000 [63/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:49.000 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:49.000 [65/707] Linking static target lib/librte_meter.a
00:01:49.000 [66/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:49.000 [67/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:49.000 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:49.000 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:49.000 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:49.000 [71/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:49.000 [72/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:49.000 [73/707] Linking static target lib/librte_ring.a
00:01:49.000 [74/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:49.000 [75/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:01:49.000 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:49.000 [77/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:49.000 [78/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:49.000 [79/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:49.000 [80/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:49.260 [81/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:49.260 [82/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:49.260 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:49.260 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:49.260 [85/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:49.260 [86/707] Linking static target lib/librte_cmdline.a
00:01:49.260 [87/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:49.260 [88/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:49.260 [89/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:49.260 [90/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:01:49.260 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:49.260 [92/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:01:49.260 [93/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:49.260 [94/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:49.260 [95/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:01:49.260 [96/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:49.260 [97/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:49.260 [98/707] Linking static target lib/librte_metrics.a
00:01:49.260 [99/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:01:49.260 [100/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:49.260 [101/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:49.260 [102/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:49.260 [103/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:01:49.260 [104/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:49.260 [105/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:01:49.260 [106/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:01:49.260 [107/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:49.260 [108/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:49.260 [109/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:49.260 [110/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:01:49.260 [111/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:01:49.260 [112/707] Linking static target lib/librte_net.a
00:01:49.260 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:49.260 [114/707] Linking static target lib/librte_bitratestats.a
00:01:49.260 [115/707] Linking static target lib/librte_cfgfile.a
00:01:49.260 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:49.260 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:49.260 [118/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:49.260 [119/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:49.526 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:49.526 [121/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.526 [122/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:49.526 [123/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:49.526 [124/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:49.526 [125/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:49.526 [126/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:49.526 [127/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:49.526 [128/707] Linking target lib/librte_log.so.24.0
00:01:49.526 [129/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.526 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:49.526 [131/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:01:49.526 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:49.526 [133/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:49.526 [134/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:49.526 [135/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:49.526 [136/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:01:49.527 [137/707] Linking static target lib/librte_timer.a
00:01:49.527 [138/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:49.527 [139/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:49.527 [140/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.527 [141/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:01:49.527 [142/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:49.527 [143/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:01:49.527 [144/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:49.527 [145/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:01:49.527 [146/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.527 [147/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:49.787 [148/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:49.787 [149/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:49.787 [150/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:49.787 [151/707] Linking static target lib/librte_mempool.a
00:01:49.787 [152/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:01:49.787 [153/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:01:49.787 [154/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:01:49.787 [155/707] Linking static target lib/librte_bbdev.a
00:01:49.787 [156/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.787 [157/707] Linking target lib/librte_kvargs.so.24.0
00:01:49.787 [158/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:49.787 [159/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:01:49.787 [160/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:49.787 [161/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:49.787 [162/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:01:49.787 [163/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:49.787 [164/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:01:49.787 [165/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:49.787 [166/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:01:49.787 [167/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:49.787 [168/707] Linking static target lib/librte_jobstats.a
00:01:49.787 [169/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:01:49.787 [170/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:01:49.787 [171/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:49.787 [172/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.787 [173/707] Linking static target lib/librte_compressdev.a
00:01:49.787 [174/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:01:49.787 [175/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:01:49.787 [176/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:01:49.787 [177/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.787 [178/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:49.787 [179/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:01:49.787 [180/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:01:49.787 [181/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:49.787 [182/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:01:49.787 [183/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:50.051 [184/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:01:50.051 [185/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:01:50.051 [186/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:50.051 [187/707] Linking static target lib/librte_dispatcher.a
00:01:50.051 [188/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:01:50.051 [189/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:01:50.051 [190/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:01:50.051 [191/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:01:50.051 [192/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:01:50.051 [193/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:50.051 [194/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:01:50.051 [195/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:01:50.051 [196/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:50.051 [197/707] Linking static target lib/librte_latencystats.a
00:01:50.051 [198/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:50.051 [199/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:01:50.051 [200/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:01:50.051 [201/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:01:50.051 [202/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:01:50.051 [203/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:01:50.051 [204/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:01:50.051 [205/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:50.051 [206/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:50.051 [207/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:50.051 [208/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:01:50.051 [209/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:01:50.051 [210/707] Linking static target lib/librte_telemetry.a
00:01:50.051 [211/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:01:50.051 [212/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:01:50.051 [213/707] Linking static target lib/librte_rcu.a
00:01:50.051 [214/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:50.051 [215/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:01:50.051 [216/707] Linking static target lib/librte_gpudev.a
00:01:50.051 [217/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.051 [218/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:01:50.051 [219/707] Linking static target lib/librte_stack.a
00:01:50.052 [220/707] Linking static target lib/librte_eal.a
00:01:50.052 [221/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:01:50.052 [222/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:50.052 [223/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:50.052 [224/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:01:50.052 [225/707] Linking static target lib/librte_gro.a
00:01:50.052 [226/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:01:50.052 [227/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:01:50.052 [228/707] Linking static target lib/librte_dmadev.a
00:01:50.052 [229/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:50.315 [230/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:01:50.315 [231/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:01:50.315 [232/707] Linking static target lib/librte_gso.a
00:01:50.315 [233/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:01:50.315 [234/707] Linking static target lib/librte_distributor.a
00:01:50.315 [235/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:01:50.315 [236/707] Linking static target lib/librte_regexdev.a
00:01:50.315 [237/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:50.315 [238/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:50.315 [239/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.315 [240/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:01:50.315 [241/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:01:50.315 [242/707] Linking static target lib/librte_mbuf.a
00:01:50.315 [243/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:01:50.315 [244/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:01:50.315 [245/707] Linking static target lib/librte_rawdev.a
00:01:50.315 [246/707] Linking static target lib/librte_power.a
00:01:50.315 [247/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:01:50.315 [248/707] Linking static target lib/librte_mldev.a
00:01:50.315 [249/707] Linking static target lib/librte_ip_frag.a
00:01:50.315 [250/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:01:50.315 [251/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:01:50.315 [252/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:01:50.315 [253/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:50.315 [254/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.579 [255/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:01:50.579 [256/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:50.579 [257/707] Linking static target lib/librte_reorder.a
00:01:50.579 [258/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:01:50.579 [259/707] Linking static target lib/librte_bpf.a
00:01:50.579 [260/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:01:50.579 [261/707] Linking static target lib/librte_pcapng.a
00:01:50.579 [262/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.579 [263/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:01:50.579 [264/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:01:50.579 [265/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:01:50.579 [266/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:50.579 [267/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:01:50.579 [268/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:01:50.579 [269/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:50.579 [270/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.579 [271/707] Linking static target lib/librte_security.a
00:01:50.579 [272/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:50.579 [273/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:50.579 [274/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.579 [275/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:01:50.579 [276/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.579 [277/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.580 [278/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.580 [279/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:01:50.580 [280/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:50.580 [281/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:01:50.580 [282/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.580 [283/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:01:50.580 [284/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:01:50.842 [285/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.842 [286/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:01:50.842 [287/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.842 [288/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:01:50.842 [289/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:50.842 [290/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.842 [291/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:01:50.842 [292/707] Linking static target lib/librte_lpm.a
00:01:50.842 [293/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.842 [294/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:01:50.842 [295/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:01:50.842 [296/707] Linking static target lib/librte_rib.a
00:01:50.842 [297/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.842 [298/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:01:50.842 [299/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:01:50.842 [300/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:50.842 [301/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:50.842 [302/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.842 [303/707] Linking target lib/librte_telemetry.so.24.0
00:01:50.842 [304/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:01:50.842 [305/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:01:50.842 [306/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:01:50.842 [307/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.842 [308/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:01:50.842 [309/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:01:50.842 [310/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:01:51.103 [311/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:01:51.103 [312/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.103 [313/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:01:51.103 [314/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:01:51.103 [315/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.103 [316/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:01:51.103 [317/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:01:51.103 [318/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:01:51.103 [319/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:01:51.103 [320/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:01:51.103 [321/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:01:51.103 [322/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:01:51.103 [323/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:01:51.103 [324/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:51.103 [325/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:01:51.103 [326/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:01:51.103 [327/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.103 [328/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:01:51.103 [329/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:01:51.103 [330/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:01:51.103 [331/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:01:51.103 [332/707] Linking static target lib/librte_efd.a
00:01:51.103 [333/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:01:51.103 [334/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:01:51.103 [335/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:01:51.366 [336/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:01:51.366 [337/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:01:51.366 [338/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.366 [339/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:01:51.366 [340/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:01:51.366 [341/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:01:51.366 [342/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:51.366 [343/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.366 [344/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:01:51.366 [345/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:01:51.366 [346/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:01:51.366 [347/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:01:51.366 [348/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:01:51.366 [349/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:01:51.366 [350/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:01:51.366 [351/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:01:51.366 [352/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:01:51.366 [353/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.366 [354/707] Linking static target lib/librte_fib.a
00:01:51.366 [355/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:51.366 [356/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:01:51.366 [357/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.366 [358/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:01:51.366 [359/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:01:51.630 [360/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:01:51.630 [361/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:01:51.630 [362/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.630 [363/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.630 [364/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:01:51.630 [365/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:01:51.630 [366/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:01:51.630 [367/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:51.630 [368/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.630 [369/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:01:51.630 [370/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:51.630 [371/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:01:51.630 [372/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.630 [373/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:01:51.630 [374/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:01:51.630 [375/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:51.630 [376/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:01:51.630 [377/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:01:51.630 [378/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:51.630 [379/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:51.630 [380/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:51.630 [381/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:01:51.896 [382/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:51.896 [383/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:01:51.896 [384/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:01:51.896 [385/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:01:51.896 [386/707] Linking static target lib/librte_pdump.a
00:01:51.896 [387/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:01:51.896 [388/707] Linking static target lib/librte_graph.a
00:01:51.896 [389/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:01:51.896 [390/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:01:51.896 [391/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:51.896 [392/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:01:51.896 [393/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:01:51.896 [394/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:51.896 [395/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:01:51.896 [396/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:01:51.896 [397/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:01:51.896 [398/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:01:51.896 [399/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:01:51.896 [400/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:01:51.896 [401/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:01:51.896 [402/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:01:51.896 [403/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:01:51.896 [404/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:01:51.896 [405/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:01:51.896 [406/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:52.156 [407/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.156 [408/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:52.156 [409/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:01:52.156 [410/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:52.156 [411/707] Linking static target drivers/librte_bus_vdev.a
00:01:52.156 [412/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:01:52.156 [413/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:01:52.156 [414/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:01:52.156 [415/707] Linking static target lib/librte_table.a
00:01:52.156 [416/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:01:52.156 [417/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:01:52.156 [418/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:01:52.156 [419/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:01:52.156 [420/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:01:52.156 [421/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:01:52.156 [422/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:01:52.156 [423/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:01:52.156 [424/707] Linking static target lib/librte_sched.a
00:01:52.156 [425/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:01:52.156 [426/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:01:52.156 [427/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:01:52.156 [428/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:01:52.156 [429/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:52.156 [430/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:52.156 [431/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:01:52.156 [432/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.156 [433/707] Linking static target lib/librte_cryptodev.a
00:01:52.156 [434/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:52.442 [435/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:01:52.442 [436/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:52.442 [437/707] Linking static target drivers/librte_bus_pci.a
00:01:52.442 [438/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:01:52.442 [439/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:01:52.442 [440/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:01:52.442 [441/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:01:52.442 [442/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:01:52.442 [443/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:01:52.442 [444/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:01:52.442 [445/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:01:52.442 [446/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:01:52.442 [447/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:01:52.442 [448/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:01:52.442 [449/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:01:52.442 [450/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:01:52.442 [451/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:01:52.442 [452/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.442 [453/707] Linking static target lib/librte_ipsec.a
00:01:52.443 [454/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:01:52.443 [455/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:01:52.443 [456/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:01:52.443 [457/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:52.747 [458/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:01:52.747 [459/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:52.747 [460/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:01:52.747 [461/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:01:52.747 [462/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:01:52.747 [463/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:01:52.747 [464/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:01:52.747 [465/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:01:52.747 [466/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:01:52.747 [467/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.747 [468/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:01:52.747 [469/707] Linking static target lib/librte_member.a
00:01:52.747 [470/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:01:52.747 [471/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:01:52.747 [472/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:01:52.747 [473/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:01:52.747 [474/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:01:52.747 [475/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:01:52.747 [476/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:01:52.747 [477/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:01:52.747 [478/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:01:52.747 [479/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:01:52.747 [480/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:01:52.747 [481/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:01:52.747 [482/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:01:52.747 [483/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:01:52.747 [484/707] Linking static target lib/librte_pdcp.a
00:01:52.747 [485/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:01:52.747 [486/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:01:52.747 [487/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.747 [488/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:01:52.747 [489/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:01:52.747 [490/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.747 [491/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:01:52.747 [492/707] Linking static target lib/librte_node.a
00:01:52.747 [493/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:52.747 [494/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:01:52.747 [495/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:01:52.747 [496/707] Linking static target lib/librte_hash.a
00:01:52.747 [497/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:01:52.747 [498/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:52.747 [499/707] Linking static target lib/librte_port.a
00:01:52.747 [500/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:01:52.747 [501/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:01:52.747 [502/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:52.747 [503/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:52.747 [504/707] Linking static target drivers/librte_mempool_ring.a
00:01:52.747 [505/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:01:52.747 [506/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:01:53.006 [507/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:01:53.006 [508/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.006 [509/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:01:53.006 [510/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:01:53.006 [511/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.006 [512/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:01:53.006 [513/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:01:53.006 [514/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.006 [515/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:01:53.006 [516/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:01:53.006 [517/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:01:53.006 [518/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:01:53.006 [519/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.006 [520/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:01:53.006 [521/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:01:53.006 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:01:53.006 [523/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:01:53.006 [524/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:01:53.006 [525/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:01:53.006 [526/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:01:53.006 [527/707] Linking static target lib/acl/libavx2_tmp.a
00:01:53.006 [528/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:01:53.006 [529/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:01:53.006 [530/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:01:53.006 [531/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:01:53.006 [532/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:01:53.006 [533/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:01:53.265 [534/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:01:53.265 [535/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:01:53.265 [536/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:01:53.265 [537/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:01:53.266 [538/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.266 [539/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:01:53.266 [540/707] Linking static target lib/librte_acl.a
00:01:53.266 [541/707] Linking static target lib/librte_eventdev.a
00:01:53.266 [542/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:01:53.266 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:01:53.266 [544/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.266 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:01:53.266 [546/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:01:53.266 [547/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:01:53.266 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:01:53.266 [549/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:01:53.266 [550/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:01:53.266 [551/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:01:53.266 [552/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:01:53.525 [553/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:01:53.525 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:01:53.525 [555/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:01:53.525 [556/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:01:53.525 [557/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:01:53.525 [558/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:01:53.525 [559/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.525 [560/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:01:53.525 [561/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:01:53.525 [562/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:01:53.525 [563/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:01:53.525 [564/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.525 [565/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.525 [566/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:01:53.525 [567/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:01:53.525 [568/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:01:53.784 [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:01:53.784 [570/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:01:54.044 [571/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:01:54.044 [572/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.044 [573/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:54.303 [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:01:54.303 [575/707] Linking static target lib/librte_ethdev.a
00:01:54.562 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:01:54.562 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:01:54.562 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:01:54.821 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:01:55.080 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:01:55.648 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:01:55.648 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a
00:01:55.907 [583/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:01:55.907 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:01:55.907 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:55.907 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:55.907 [587/707] Linking static target drivers/librte_net_i40e.a
00:01:56.167 [588/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:56.735 [589/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.735 [590/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:01:56.994 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.562 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:02.834 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.834 [594/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.834 [595/707] Linking target lib/librte_eal.so.24.0
00:02:02.834 [596/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:02.834 [597/707] Linking target lib/librte_ring.so.24.0
00:02:02.834 [598/707] Linking target lib/librte_timer.so.24.0
00:02:02.834 [599/707] Linking target lib/librte_rawdev.so.24.0
00:02:02.834 [600/707] Linking target lib/librte_meter.so.24.0
00:02:02.834 [601/707] Linking target lib/librte_pci.so.24.0
00:02:02.834 [602/707] Linking target lib/librte_dmadev.so.24.0
00:02:02.834 [603/707] Linking target drivers/librte_bus_vdev.so.24.0
00:02:02.834 [604/707] Linking target lib/librte_jobstats.so.24.0
00:02:02.834 [605/707] Linking target lib/librte_cfgfile.so.24.0
00:02:02.834 [606/707] Linking target lib/librte_acl.so.24.0
00:02:02.834 [607/707] Linking target lib/librte_stack.so.24.0
00:02:02.834 [608/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:02.834 [609/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:02.834 [610/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:02.834 [611/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:02.834 [612/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:02.834 [613/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:02.834 [614/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:02:02.834 [615/707] Linking target lib/librte_rcu.so.24.0
00:02:02.834 [616/707] Linking target lib/librte_mempool.so.24.0
00:02:02.834 [617/707] Linking target drivers/librte_bus_pci.so.24.0
00:02:03.102 [618/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:03.102 [619/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:03.102 [620/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:02:03.102 [621/707] Linking target lib/librte_rib.so.24.0
00:02:03.102 [622/707] Linking target drivers/librte_mempool_ring.so.24.0
00:02:03.102 [623/707] Linking target lib/librte_mbuf.so.24.0
00:02:03.102 [624/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:03.102 [625/707] Linking static target lib/librte_pipeline.a
00:02:03.102 [626/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:03.102 [627/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:03.102 [628/707] Linking target lib/librte_fib.so.24.0
00:02:03.102 [629/707] Linking target lib/librte_bbdev.so.24.0
00:02:03.102 [630/707] Linking target lib/librte_net.so.24.0
00:02:03.362 [631/707] Linking target lib/librte_mldev.so.24.0
00:02:03.362 [632/707] Linking target lib/librte_reorder.so.24.0
00:02:03.362 [633/707] Linking target lib/librte_distributor.so.24.0
00:02:03.362 [634/707] Linking target lib/librte_compressdev.so.24.0
00:02:03.362 [635/707] Linking target lib/librte_gpudev.so.24.0
00:02:03.362 [636/707] Linking target lib/librte_regexdev.so.24.0
00:02:03.362 [637/707] Linking target lib/librte_cryptodev.so.24.0
00:02:03.362 [638/707] Linking target lib/librte_sched.so.24.0
00:02:03.362 [639/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:03.362 [640/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:03.362 [641/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:03.362 [642/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:02:03.362 [643/707] Linking target lib/librte_hash.so.24.0
00:02:03.362 [644/707] Linking target lib/librte_security.so.24.0
00:02:03.362 [645/707] Linking target lib/librte_cmdline.so.24.0
00:02:03.362 [646/707] Linking target lib/librte_ethdev.so.24.0
00:02:03.620 [647/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:03.620 [648/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:02:03.620 [649/707] Linking target lib/librte_efd.so.24.0
00:02:03.620 [650/707] Linking target lib/librte_lpm.so.24.0
00:02:03.620 [651/707] Linking target lib/librte_member.so.24.0
00:02:03.620 [652/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:03.620 [653/707] Linking target lib/librte_ipsec.so.24.0
00:02:03.620 [654/707] Linking target lib/librte_pdcp.so.24.0
00:02:03.620 [655/707] Linking target lib/librte_gso.so.24.0
00:02:03.620 [656/707] Linking target lib/librte_metrics.so.24.0
00:02:03.620 [657/707] Linking target lib/librte_bpf.so.24.0
00:02:03.620 [658/707] Linking target lib/librte_pcapng.so.24.0
00:02:03.620 [659/707] Linking target lib/librte_gro.so.24.0
00:02:03.620 [660/707] Linking target lib/librte_ip_frag.so.24.0
00:02:03.620 [661/707] Linking target lib/librte_power.so.24.0
00:02:03.620 [662/707] Linking target lib/librte_eventdev.so.24.0
00:02:03.620 [663/707] Linking target drivers/librte_net_i40e.so.24.0
00:02:03.620 [664/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:02:03.620 [665/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:03.879 [666/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:02:03.879 [667/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:02:03.879 [668/707] Linking static target lib/librte_vhost.a
00:02:03.879 [669/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:03.879 [670/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:02:03.879 [671/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:02:03.879 [672/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:03.879 [673/707] Linking target lib/librte_bitratestats.so.24.0
00:02:03.879 [674/707] Linking target lib/librte_latencystats.so.24.0
00:02:03.879 [675/707] Linking target lib/librte_graph.so.24.0
00:02:03.879 [676/707] Linking target lib/librte_pdump.so.24.0
00:02:03.879 [677/707] Linking target lib/librte_dispatcher.so.24.0
00:02:03.879 [678/707] Linking target lib/librte_port.so.24.0
00:02:03.879 [679/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:02:04.136 [680/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:02:04.136 [681/707] Linking target lib/librte_node.so.24.0
00:02:04.136 [682/707] Linking target lib/librte_table.so.24.0
00:02:04.136 [683/707] Linking target app/dpdk-proc-info
00:02:04.136 [684/707] Linking target app/dpdk-test-flow-perf
00:02:04.136 [685/707] Linking target app/dpdk-pdump
00:02:04.136 [686/707] Linking target app/dpdk-test-acl
00:02:04.136 [687/707] Linking target app/dpdk-test-dma-perf
00:02:04.136 [688/707] Linking target app/dpdk-test-regex
00:02:04.136 [689/707] Linking target app/dpdk-graph
00:02:04.136 [690/707] Linking target app/dpdk-test-crypto-perf
00:02:04.136 [691/707] Linking target app/dpdk-test-cmdline
00:02:04.136 [692/707] Linking target app/dpdk-test-gpudev
00:02:04.136 [693/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:04.136 [694/707] Linking target app/dpdk-test-bbdev
00:02:04.136 [695/707] Linking target app/dpdk-dumpcap
00:02:04.136 [696/707] Linking target app/dpdk-test-fib
00:02:04.136 [697/707] Linking target app/dpdk-test-pipeline
00:02:04.136 [698/707] Linking target app/dpdk-test-compress-perf
00:02:04.136 [699/707] Linking target app/dpdk-test-security-perf
00:02:04.136 [700/707] Linking target app/dpdk-test-mldev
00:02:04.136 [701/707] Linking target app/dpdk-test-sad
00:02:04.136 [702/707] Linking target app/dpdk-test-eventdev
00:02:04.394 [703/707] Linking target app/dpdk-testpmd
00:02:05.774 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.033 [705/707] Linking target lib/librte_vhost.so.24.0
00:02:09.328 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.328 [707/707] Linking target lib/librte_pipeline.so.24.0
00:02:09.328 20:48:59 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install
00:02:09.328 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp'
00:02:09.328 [0/1] Installing files.
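For reference, the lines below are a minimal sketch of the configure/build/install sequence this log captures, run from a DPDK source checkout. The two ninja invocations are the ones echoed above by common/autobuild_common.sh (steps @186 and @187); the meson line is an assumption reconstructed from the option summary at the head of this log (the autobuild script's actual flags are not shown here, and the option list is only a subset of the summary). Spelling it as `meson setup` rather than bare `meson` avoids the deprecation warning recorded above.

    # Sketch only: meson options inferred from the logged configuration summary,
    # not copied from the autobuild script itself.
    meson setup build-tmp \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    ninja -C build-tmp -j112          # compile step, logged at autobuild_common.sh@186
    ninja -C build-tmp -j112 install  # install step, logged at autobuild_common.sh@187
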
00:02:09.328 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.328 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:09.329 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.329 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.330 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:09.330 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.331 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:09.332 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:09.333 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:09.333 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.333 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.333 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.333 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.333 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing 
lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_gro.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
00:02:09.334 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:09.334 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:09.334 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:09.334 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.334 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:09.334 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.334 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.596 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.597 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.598 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.599 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:09.600 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:09.600 Installing symlink pointing to librte_log.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:09.600 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so 00:02:09.600 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:09.600 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:09.600 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:09.600 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:09.600 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:09.600 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:09.600 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:09.600 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:09.600 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:09.600 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:09.600 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:09.600 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:09.600 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:09.600 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:09.600 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:09.600 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:09.600 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:09.600 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:09.600 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:09.600 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:09.600 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:09.600 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:09.600 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:09.600 Installing symlink pointing to librte_cmdline.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:09.600 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:09.600 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:09.600 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:09.600 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:09.600 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:09.600 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:09.600 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:09.600 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:09.600 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:09.600 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:09.600 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:09.600 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:09.600 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:09.600 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:09.600 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:09.600 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:09.600 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:09.600 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:09.600 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:09.600 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:09.600 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:09.600 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:09.600 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:09.600 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:09.600 
Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:09.600 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:09.600 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:09.600 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:09.601 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:09.601 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:09.601 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:09.601 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:09.601 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:09.601 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:09.601 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:09.601 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:09.601 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:09.601 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:09.601 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:09.601 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:09.601 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:09.601 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:09.601 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:09.601 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:09.601 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:09.601 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:09.601 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:09.601 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:09.601 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:09.601 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:09.601 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:09.601 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:09.601 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:09.601 Installing symlink pointing to librte_latencystats.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:09.601 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:09.601 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:09.601 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:09.601 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:09.601 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:09.601 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:09.601 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:09.601 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:09.601 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:09.601 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:09.601 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:09.601 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:09.601 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:09.601 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:09.601 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:09.601 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:09.601 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:09.601 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:09.601 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:09.601 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:09.601 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:09.601 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:09.601 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:09.601 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:09.601 Installing symlink pointing to librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:09.601 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:09.601 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:09.601 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:09.601 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:09.601 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:09.601 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:09.601 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:09.601 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:09.601 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:09.601 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:09.601 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:09.601 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:09.601 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:09.601 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:09.601 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:09.601 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:09.601 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:09.601 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:09.601 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:02:09.601 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:09.601 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:09.601 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:09.601 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:09.601 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
00:02:09.601 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:09.601 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:09.601 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:09.601 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:09.601 20:49:00 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:09.601 20:49:00 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:09.601 20:49:00 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:09.601 20:49:00 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:09.601 00:02:09.601 real 0m27.526s 00:02:09.601 user 8m3.003s 00:02:09.601 sys 2m32.777s 00:02:09.601 20:49:00 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:09.601 20:49:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:09.601 ************************************ 00:02:09.601 END TEST build_native_dpdk 00:02:09.601 ************************************ 00:02:09.601 20:49:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:09.601 20:49:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:09.601 20:49:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:09.601 20:49:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:09.601 20:49:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:09.601 20:49:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:09.601 20:49:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:09.601 20:49:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:02:09.861 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:09.861 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:09.861 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:09.861 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:10.429 Using 'verbs' RDMA provider 00:02:25.904 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:38.118 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:38.118 Creating mk/config.mk...done. 00:02:38.118 Creating mk/cc.flags.mk...done. 00:02:38.118 Type 'make' to build. 
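The install step logged above stages DPDK's headers, its pkg-config metadata (libdpdk.pc, libdpdk-libs.pc), and the versioned shared objects — each librte_*.so is the tail of a symlink chain ending at the ABI-versioned .so.24.0 — after which SPDK's configure is pointed at that staged build. As a minimal sketch of reproducing the same flow by hand, assuming DPDK has already been built into dpdk/build exactly as logged (the workspace paths below are this CI job's and would differ elsewhere; the configure flags are taken verbatim from the log line above):

    # Check that the staged DPDK is discoverable through its pkg-config files
    PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig \
        pkg-config --cflags --libs libdpdk

    # Configure SPDK against that DPDK build, then build it
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk \
        --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared
    make -j112

The harness wraps that last step as 'run_test make make -j112', which is what produces the START TEST / END TEST banners and the CC/LIB/SO/SYMLINK build output that follows.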
00:02:38.118 20:49:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:38.118 20:49:28 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:38.118 20:49:28 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:38.118 20:49:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.118 ************************************ 00:02:38.118 START TEST make 00:02:38.118 ************************************ 00:02:38.118 20:49:28 make -- common/autotest_common.sh@1121 -- $ make -j112 00:02:38.118 make[1]: Nothing to be done for 'all'. 00:02:48.101 CC lib/ut/ut.o 00:02:48.101 CC lib/log/log.o 00:02:48.101 CC lib/log/log_deprecated.o 00:02:48.101 CC lib/log/log_flags.o 00:02:48.101 CC lib/ut_mock/mock.o 00:02:48.101 LIB libspdk_ut.a 00:02:48.101 SO libspdk_ut.so.2.0 00:02:48.101 LIB libspdk_log.a 00:02:48.101 LIB libspdk_ut_mock.a 00:02:48.101 SYMLINK libspdk_ut.so 00:02:48.101 SO libspdk_log.so.7.0 00:02:48.101 SO libspdk_ut_mock.so.6.0 00:02:48.101 SYMLINK libspdk_log.so 00:02:48.101 SYMLINK libspdk_ut_mock.so 00:02:48.361 CC lib/dma/dma.o 00:02:48.361 CC lib/ioat/ioat.o 00:02:48.361 CXX lib/trace_parser/trace.o 00:02:48.361 CC lib/util/base64.o 00:02:48.361 CC lib/util/bit_array.o 00:02:48.361 CC lib/util/cpuset.o 00:02:48.361 CC lib/util/crc16.o 00:02:48.361 CC lib/util/crc32_ieee.o 00:02:48.361 CC lib/util/crc32.o 00:02:48.361 CC lib/util/crc32c.o 00:02:48.361 CC lib/util/crc64.o 00:02:48.361 CC lib/util/dif.o 00:02:48.361 CC lib/util/fd.o 00:02:48.361 CC lib/util/file.o 00:02:48.361 CC lib/util/hexlify.o 00:02:48.361 CC lib/util/iov.o 00:02:48.361 CC lib/util/math.o 00:02:48.361 CC lib/util/pipe.o 00:02:48.361 CC lib/util/strerror_tls.o 00:02:48.361 CC lib/util/string.o 00:02:48.361 CC lib/util/uuid.o 00:02:48.361 CC lib/util/xor.o 00:02:48.361 CC lib/util/fd_group.o 00:02:48.361 CC lib/util/zipf.o 00:02:48.620 LIB libspdk_dma.a 00:02:48.620 CC lib/vfio_user/host/vfio_user.o 00:02:48.620 CC lib/vfio_user/host/vfio_user_pci.o 00:02:48.620 SO libspdk_dma.so.4.0 00:02:48.620 LIB libspdk_ioat.a 00:02:48.620 SYMLINK libspdk_dma.so 00:02:48.620 SO libspdk_ioat.so.7.0 00:02:48.620 SYMLINK libspdk_ioat.so 00:02:48.620 LIB libspdk_vfio_user.a 00:02:48.879 SO libspdk_vfio_user.so.5.0 00:02:48.879 LIB libspdk_util.a 00:02:48.879 SYMLINK libspdk_vfio_user.so 00:02:48.879 SO libspdk_util.so.9.0 00:02:48.879 SYMLINK libspdk_util.so 00:02:48.879 LIB libspdk_trace_parser.a 00:02:49.138 SO libspdk_trace_parser.so.5.0 00:02:49.138 SYMLINK libspdk_trace_parser.so 00:02:49.396 CC lib/vmd/vmd.o 00:02:49.396 CC lib/vmd/led.o 00:02:49.396 CC lib/env_dpdk/memory.o 00:02:49.396 CC lib/env_dpdk/env.o 00:02:49.396 CC lib/env_dpdk/pci.o 00:02:49.396 CC lib/env_dpdk/threads.o 00:02:49.396 CC lib/env_dpdk/init.o 00:02:49.396 CC lib/env_dpdk/pci_ioat.o 00:02:49.396 CC lib/env_dpdk/pci_virtio.o 00:02:49.396 CC lib/env_dpdk/pci_event.o 00:02:49.396 CC lib/env_dpdk/pci_vmd.o 00:02:49.396 CC lib/rdma/common.o 00:02:49.396 CC lib/env_dpdk/pci_idxd.o 00:02:49.396 CC lib/rdma/rdma_verbs.o 00:02:49.396 CC lib/env_dpdk/sigbus_handler.o 00:02:49.396 CC lib/env_dpdk/pci_dpdk.o 00:02:49.396 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:49.396 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:49.396 CC lib/json/json_util.o 00:02:49.396 CC lib/json/json_parse.o 00:02:49.396 CC lib/json/json_write.o 00:02:49.396 CC lib/idxd/idxd.o 00:02:49.396 CC lib/conf/conf.o 00:02:49.396 CC lib/idxd/idxd_user.o 00:02:49.396 CC lib/idxd/idxd_kernel.o 00:02:49.655 LIB libspdk_conf.a 00:02:49.655 LIB libspdk_json.a 00:02:49.655 LIB libspdk_rdma.a 
00:02:49.655 SO libspdk_conf.so.6.0 00:02:49.655 SO libspdk_json.so.6.0 00:02:49.655 SO libspdk_rdma.so.6.0 00:02:49.655 SYMLINK libspdk_conf.so 00:02:49.655 SYMLINK libspdk_json.so 00:02:49.655 SYMLINK libspdk_rdma.so 00:02:49.913 LIB libspdk_idxd.a 00:02:49.913 LIB libspdk_vmd.a 00:02:49.913 SO libspdk_vmd.so.6.0 00:02:49.913 SO libspdk_idxd.so.12.0 00:02:49.914 SYMLINK libspdk_vmd.so 00:02:49.914 SYMLINK libspdk_idxd.so 00:02:50.171 CC lib/jsonrpc/jsonrpc_server.o 00:02:50.171 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:50.171 CC lib/jsonrpc/jsonrpc_client.o 00:02:50.171 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:50.171 LIB libspdk_jsonrpc.a 00:02:50.429 LIB libspdk_env_dpdk.a 00:02:50.429 SO libspdk_jsonrpc.so.6.0 00:02:50.429 SO libspdk_env_dpdk.so.14.0 00:02:50.429 SYMLINK libspdk_jsonrpc.so 00:02:50.429 SYMLINK libspdk_env_dpdk.so 00:02:50.686 CC lib/rpc/rpc.o 00:02:50.943 LIB libspdk_rpc.a 00:02:50.943 SO libspdk_rpc.so.6.0 00:02:50.943 SYMLINK libspdk_rpc.so 00:02:51.202 CC lib/keyring/keyring.o 00:02:51.202 CC lib/keyring/keyring_rpc.o 00:02:51.460 CC lib/trace/trace.o 00:02:51.460 CC lib/trace/trace_flags.o 00:02:51.460 CC lib/trace/trace_rpc.o 00:02:51.460 CC lib/notify/notify.o 00:02:51.460 CC lib/notify/notify_rpc.o 00:02:51.460 LIB libspdk_keyring.a 00:02:51.460 LIB libspdk_notify.a 00:02:51.460 SO libspdk_keyring.so.1.0 00:02:51.460 SO libspdk_notify.so.6.0 00:02:51.460 LIB libspdk_trace.a 00:02:51.719 SYMLINK libspdk_keyring.so 00:02:51.719 SO libspdk_trace.so.10.0 00:02:51.719 SYMLINK libspdk_notify.so 00:02:51.719 SYMLINK libspdk_trace.so 00:02:51.978 CC lib/sock/sock.o 00:02:51.978 CC lib/sock/sock_rpc.o 00:02:51.978 CC lib/thread/thread.o 00:02:51.978 CC lib/thread/iobuf.o 00:02:52.236 LIB libspdk_sock.a 00:02:52.236 SO libspdk_sock.so.9.0 00:02:52.493 SYMLINK libspdk_sock.so 00:02:52.751 CC lib/nvme/nvme_ctrlr.o 00:02:52.751 CC lib/nvme/nvme_fabric.o 00:02:52.751 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.751 CC lib/nvme/nvme_ns_cmd.o 00:02:52.751 CC lib/nvme/nvme_ns.o 00:02:52.751 CC lib/nvme/nvme_qpair.o 00:02:52.751 CC lib/nvme/nvme_pcie_common.o 00:02:52.751 CC lib/nvme/nvme_pcie.o 00:02:52.751 CC lib/nvme/nvme_transport.o 00:02:52.751 CC lib/nvme/nvme.o 00:02:52.751 CC lib/nvme/nvme_quirks.o 00:02:52.751 CC lib/nvme/nvme_discovery.o 00:02:52.751 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.751 CC lib/nvme/nvme_tcp.o 00:02:52.751 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.751 CC lib/nvme/nvme_opal.o 00:02:52.751 CC lib/nvme/nvme_io_msg.o 00:02:52.751 CC lib/nvme/nvme_poll_group.o 00:02:52.751 CC lib/nvme/nvme_auth.o 00:02:52.751 CC lib/nvme/nvme_zns.o 00:02:52.751 CC lib/nvme/nvme_stubs.o 00:02:52.751 CC lib/nvme/nvme_cuse.o 00:02:52.751 CC lib/nvme/nvme_rdma.o 00:02:53.009 LIB libspdk_thread.a 00:02:53.009 SO libspdk_thread.so.10.0 00:02:53.266 SYMLINK libspdk_thread.so 00:02:53.525 CC lib/init/json_config.o 00:02:53.525 CC lib/virtio/virtio.o 00:02:53.525 CC lib/init/subsystem.o 00:02:53.525 CC lib/init/subsystem_rpc.o 00:02:53.525 CC lib/init/rpc.o 00:02:53.525 CC lib/virtio/virtio_vhost_user.o 00:02:53.525 CC lib/virtio/virtio_vfio_user.o 00:02:53.525 CC lib/virtio/virtio_pci.o 00:02:53.525 CC lib/accel/accel.o 00:02:53.525 CC lib/accel/accel_rpc.o 00:02:53.525 CC lib/accel/accel_sw.o 00:02:53.525 CC lib/blob/blobstore.o 00:02:53.525 CC lib/blob/request.o 00:02:53.525 CC lib/blob/zeroes.o 00:02:53.525 CC lib/blob/blob_bs_dev.o 00:02:53.783 LIB libspdk_init.a 00:02:53.783 SO libspdk_init.so.5.0 00:02:53.783 LIB libspdk_virtio.a 00:02:53.783 SO libspdk_virtio.so.7.0 
00:02:53.783 SYMLINK libspdk_init.so 00:02:54.042 SYMLINK libspdk_virtio.so 00:02:54.300 CC lib/event/app.o 00:02:54.300 CC lib/event/reactor.o 00:02:54.300 CC lib/event/log_rpc.o 00:02:54.300 CC lib/event/app_rpc.o 00:02:54.300 CC lib/event/scheduler_static.o 00:02:54.300 LIB libspdk_accel.a 00:02:54.300 LIB libspdk_nvme.a 00:02:54.300 SO libspdk_accel.so.15.0 00:02:54.300 SYMLINK libspdk_accel.so 00:02:54.300 SO libspdk_nvme.so.13.0 00:02:54.633 LIB libspdk_event.a 00:02:54.633 SO libspdk_event.so.13.0 00:02:54.633 SYMLINK libspdk_event.so 00:02:54.633 SYMLINK libspdk_nvme.so 00:02:54.633 CC lib/bdev/bdev.o 00:02:54.633 CC lib/bdev/bdev_rpc.o 00:02:54.633 CC lib/bdev/bdev_zone.o 00:02:54.633 CC lib/bdev/part.o 00:02:54.633 CC lib/bdev/scsi_nvme.o 00:02:55.570 LIB libspdk_blob.a 00:02:55.570 SO libspdk_blob.so.11.0 00:02:55.829 SYMLINK libspdk_blob.so 00:02:56.088 CC lib/lvol/lvol.o 00:02:56.088 CC lib/blobfs/blobfs.o 00:02:56.088 CC lib/blobfs/tree.o 00:02:56.347 LIB libspdk_bdev.a 00:02:56.607 SO libspdk_bdev.so.15.0 00:02:56.607 SYMLINK libspdk_bdev.so 00:02:56.607 LIB libspdk_blobfs.a 00:02:56.607 SO libspdk_blobfs.so.10.0 00:02:56.607 LIB libspdk_lvol.a 00:02:56.866 SO libspdk_lvol.so.10.0 00:02:56.866 SYMLINK libspdk_blobfs.so 00:02:56.866 SYMLINK libspdk_lvol.so 00:02:56.866 CC lib/ftl/ftl_core.o 00:02:56.866 CC lib/ftl/ftl_init.o 00:02:56.866 CC lib/ftl/ftl_layout.o 00:02:56.866 CC lib/ftl/ftl_debug.o 00:02:56.866 CC lib/ftl/ftl_l2p.o 00:02:56.866 CC lib/ftl/ftl_io.o 00:02:56.866 CC lib/ftl/ftl_sb.o 00:02:56.866 CC lib/ftl/ftl_l2p_flat.o 00:02:56.866 CC lib/ftl/ftl_nv_cache.o 00:02:56.866 CC lib/ftl/ftl_band.o 00:02:56.866 CC lib/ftl/ftl_reloc.o 00:02:56.866 CC lib/ftl/ftl_band_ops.o 00:02:56.866 CC lib/ftl/ftl_writer.o 00:02:56.866 CC lib/ftl/ftl_rq.o 00:02:56.866 CC lib/ftl/ftl_l2p_cache.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt.o 00:02:56.866 CC lib/ftl/ftl_p2l.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:56.866 CC lib/scsi/dev.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:56.866 CC lib/scsi/lun.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:56.866 CC lib/scsi/port.o 00:02:56.866 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:56.866 CC lib/scsi/scsi.o 00:02:56.866 CC lib/ftl/utils/ftl_conf.o 00:02:56.866 CC lib/scsi/task.o 00:02:56.866 CC lib/scsi/scsi_pr.o 00:02:56.866 CC lib/ftl/utils/ftl_md.o 00:02:56.866 CC lib/ftl/utils/ftl_mempool.o 00:02:56.866 CC lib/scsi/scsi_rpc.o 00:02:56.866 CC lib/scsi/scsi_bdev.o 00:02:56.866 CC lib/ftl/utils/ftl_bitmap.o 00:02:56.866 CC lib/ftl/utils/ftl_property.o 00:02:56.866 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:56.866 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:56.866 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:56.866 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:56.866 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:56.866 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:56.866 CC lib/nbd/nbd.o 00:02:56.867 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:56.867 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:56.867 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:56.867 CC lib/nbd/nbd_rpc.o 00:02:56.867 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:56.867 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:56.867 
CC lib/nvmf/ctrlr_discovery.o 00:02:56.867 CC lib/nvmf/ctrlr.o 00:02:56.867 CC lib/ftl/base/ftl_base_bdev.o 00:02:57.125 CC lib/nvmf/ctrlr_bdev.o 00:02:57.125 CC lib/ftl/ftl_trace.o 00:02:57.125 CC lib/ublk/ublk_rpc.o 00:02:57.125 CC lib/ftl/base/ftl_base_dev.o 00:02:57.125 CC lib/ublk/ublk.o 00:02:57.125 CC lib/nvmf/nvmf_rpc.o 00:02:57.125 CC lib/nvmf/subsystem.o 00:02:57.125 CC lib/nvmf/nvmf.o 00:02:57.125 CC lib/nvmf/transport.o 00:02:57.125 CC lib/nvmf/tcp.o 00:02:57.125 CC lib/nvmf/stubs.o 00:02:57.125 CC lib/nvmf/rdma.o 00:02:57.125 CC lib/nvmf/mdns_server.o 00:02:57.125 CC lib/nvmf/auth.o 00:02:57.384 LIB libspdk_nbd.a 00:02:57.384 SO libspdk_nbd.so.7.0 00:02:57.643 LIB libspdk_scsi.a 00:02:57.643 SYMLINK libspdk_nbd.so 00:02:57.643 SO libspdk_scsi.so.9.0 00:02:57.643 SYMLINK libspdk_scsi.so 00:02:57.643 LIB libspdk_ublk.a 00:02:57.643 SO libspdk_ublk.so.3.0 00:02:57.902 SYMLINK libspdk_ublk.so 00:02:57.902 LIB libspdk_ftl.a 00:02:57.902 CC lib/iscsi/conn.o 00:02:57.902 CC lib/iscsi/init_grp.o 00:02:57.902 CC lib/iscsi/iscsi.o 00:02:57.902 CC lib/vhost/vhost.o 00:02:57.902 CC lib/iscsi/md5.o 00:02:57.902 CC lib/vhost/vhost_rpc.o 00:02:57.902 CC lib/iscsi/param.o 00:02:57.902 CC lib/vhost/vhost_scsi.o 00:02:57.902 CC lib/iscsi/portal_grp.o 00:02:57.902 CC lib/iscsi/tgt_node.o 00:02:57.902 CC lib/vhost/vhost_blk.o 00:02:57.902 CC lib/vhost/rte_vhost_user.o 00:02:57.902 CC lib/iscsi/iscsi_subsystem.o 00:02:57.902 SO libspdk_ftl.so.9.0 00:02:57.902 CC lib/iscsi/iscsi_rpc.o 00:02:57.902 CC lib/iscsi/task.o 00:02:58.469 SYMLINK libspdk_ftl.so 00:02:58.728 LIB libspdk_nvmf.a 00:02:58.728 SO libspdk_nvmf.so.18.0 00:02:58.728 LIB libspdk_vhost.a 00:02:58.728 SO libspdk_vhost.so.8.0 00:02:58.987 SYMLINK libspdk_nvmf.so 00:02:58.987 SYMLINK libspdk_vhost.so 00:02:58.987 LIB libspdk_iscsi.a 00:02:58.987 SO libspdk_iscsi.so.8.0 00:02:59.247 SYMLINK libspdk_iscsi.so 00:02:59.815 CC module/env_dpdk/env_dpdk_rpc.o 00:02:59.815 CC module/sock/posix/posix.o 00:02:59.815 LIB libspdk_env_dpdk_rpc.a 00:02:59.815 CC module/accel/iaa/accel_iaa.o 00:02:59.815 CC module/accel/iaa/accel_iaa_rpc.o 00:02:59.815 CC module/accel/error/accel_error_rpc.o 00:02:59.815 CC module/accel/error/accel_error.o 00:02:59.815 CC module/accel/dsa/accel_dsa_rpc.o 00:02:59.815 CC module/accel/dsa/accel_dsa.o 00:02:59.815 CC module/scheduler/gscheduler/gscheduler.o 00:02:59.815 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:59.815 CC module/keyring/file/keyring.o 00:02:59.815 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:59.815 CC module/keyring/file/keyring_rpc.o 00:02:59.815 CC module/accel/ioat/accel_ioat_rpc.o 00:02:59.815 CC module/keyring/linux/keyring.o 00:02:59.815 CC module/blob/bdev/blob_bdev.o 00:02:59.815 CC module/accel/ioat/accel_ioat.o 00:02:59.815 CC module/keyring/linux/keyring_rpc.o 00:02:59.815 SO libspdk_env_dpdk_rpc.so.6.0 00:03:00.074 SYMLINK libspdk_env_dpdk_rpc.so 00:03:00.074 LIB libspdk_scheduler_dpdk_governor.a 00:03:00.074 LIB libspdk_scheduler_gscheduler.a 00:03:00.074 LIB libspdk_keyring_linux.a 00:03:00.074 LIB libspdk_keyring_file.a 00:03:00.074 LIB libspdk_accel_error.a 00:03:00.074 LIB libspdk_accel_iaa.a 00:03:00.074 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:00.074 LIB libspdk_scheduler_dynamic.a 00:03:00.074 SO libspdk_scheduler_gscheduler.so.4.0 00:03:00.074 SO libspdk_keyring_file.so.1.0 00:03:00.074 SO libspdk_keyring_linux.so.1.0 00:03:00.074 SO libspdk_accel_error.so.2.0 00:03:00.074 LIB libspdk_accel_ioat.a 00:03:00.074 SO libspdk_accel_iaa.so.3.0 00:03:00.074 SO 
libspdk_scheduler_dynamic.so.4.0 00:03:00.074 LIB libspdk_accel_dsa.a 00:03:00.074 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:00.074 SYMLINK libspdk_scheduler_gscheduler.so 00:03:00.074 SO libspdk_accel_ioat.so.6.0 00:03:00.074 LIB libspdk_blob_bdev.a 00:03:00.074 SYMLINK libspdk_keyring_file.so 00:03:00.074 SYMLINK libspdk_accel_error.so 00:03:00.074 SYMLINK libspdk_keyring_linux.so 00:03:00.074 SYMLINK libspdk_accel_iaa.so 00:03:00.074 SYMLINK libspdk_scheduler_dynamic.so 00:03:00.074 SO libspdk_accel_dsa.so.5.0 00:03:00.074 SO libspdk_blob_bdev.so.11.0 00:03:00.332 SYMLINK libspdk_accel_ioat.so 00:03:00.332 SYMLINK libspdk_accel_dsa.so 00:03:00.332 SYMLINK libspdk_blob_bdev.so 00:03:00.332 LIB libspdk_sock_posix.a 00:03:00.332 SO libspdk_sock_posix.so.6.0 00:03:00.591 SYMLINK libspdk_sock_posix.so 00:03:00.850 CC module/bdev/null/bdev_null.o 00:03:00.850 CC module/bdev/aio/bdev_aio.o 00:03:00.850 CC module/bdev/null/bdev_null_rpc.o 00:03:00.850 CC module/bdev/aio/bdev_aio_rpc.o 00:03:00.850 CC module/bdev/ftl/bdev_ftl.o 00:03:00.850 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:00.850 CC module/blobfs/bdev/blobfs_bdev.o 00:03:00.850 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:00.850 CC module/bdev/gpt/gpt.o 00:03:00.850 CC module/bdev/lvol/vbdev_lvol.o 00:03:00.850 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:00.850 CC module/bdev/gpt/vbdev_gpt.o 00:03:00.850 CC module/bdev/malloc/bdev_malloc.o 00:03:00.850 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:00.850 CC module/bdev/delay/vbdev_delay.o 00:03:00.850 CC module/bdev/raid/bdev_raid.o 00:03:00.850 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:00.850 CC module/bdev/raid/bdev_raid_sb.o 00:03:00.850 CC module/bdev/raid/bdev_raid_rpc.o 00:03:00.850 CC module/bdev/passthru/vbdev_passthru.o 00:03:00.850 CC module/bdev/nvme/bdev_nvme.o 00:03:00.850 CC module/bdev/nvme/nvme_rpc.o 00:03:00.850 CC module/bdev/raid/raid1.o 00:03:00.850 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:00.850 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:00.850 CC module/bdev/split/vbdev_split.o 00:03:00.850 CC module/bdev/raid/raid0.o 00:03:00.850 CC module/bdev/nvme/vbdev_opal.o 00:03:00.850 CC module/bdev/nvme/bdev_mdns_client.o 00:03:00.850 CC module/bdev/raid/concat.o 00:03:00.850 CC module/bdev/split/vbdev_split_rpc.o 00:03:00.850 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:00.850 CC module/bdev/error/vbdev_error.o 00:03:00.850 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:00.850 CC module/bdev/error/vbdev_error_rpc.o 00:03:00.850 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:00.850 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:00.850 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:00.850 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:00.850 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:00.850 CC module/bdev/iscsi/bdev_iscsi.o 00:03:00.850 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:01.109 LIB libspdk_bdev_gpt.a 00:03:01.109 LIB libspdk_bdev_split.a 00:03:01.109 LIB libspdk_bdev_null.a 00:03:01.109 LIB libspdk_blobfs_bdev.a 00:03:01.109 LIB libspdk_bdev_ftl.a 00:03:01.109 SO libspdk_bdev_null.so.6.0 00:03:01.109 SO libspdk_bdev_gpt.so.6.0 00:03:01.109 SO libspdk_bdev_split.so.6.0 00:03:01.109 LIB libspdk_bdev_error.a 00:03:01.109 SO libspdk_bdev_ftl.so.6.0 00:03:01.109 LIB libspdk_bdev_aio.a 00:03:01.109 LIB libspdk_bdev_passthru.a 00:03:01.109 SO libspdk_blobfs_bdev.so.6.0 00:03:01.109 LIB libspdk_bdev_malloc.a 00:03:01.109 SYMLINK libspdk_bdev_null.so 00:03:01.109 SO libspdk_bdev_error.so.6.0 00:03:01.109 LIB libspdk_bdev_zone_block.a 
00:03:01.109 SO libspdk_bdev_passthru.so.6.0 00:03:01.109 SO libspdk_bdev_aio.so.6.0 00:03:01.109 SYMLINK libspdk_bdev_split.so 00:03:01.109 SYMLINK libspdk_bdev_ftl.so 00:03:01.109 SYMLINK libspdk_bdev_gpt.so 00:03:01.109 LIB libspdk_bdev_delay.a 00:03:01.109 LIB libspdk_bdev_iscsi.a 00:03:01.109 SO libspdk_bdev_malloc.so.6.0 00:03:01.109 SYMLINK libspdk_blobfs_bdev.so 00:03:01.109 SO libspdk_bdev_zone_block.so.6.0 00:03:01.109 SYMLINK libspdk_bdev_passthru.so 00:03:01.109 SYMLINK libspdk_bdev_error.so 00:03:01.109 SO libspdk_bdev_delay.so.6.0 00:03:01.368 SYMLINK libspdk_bdev_aio.so 00:03:01.368 SO libspdk_bdev_iscsi.so.6.0 00:03:01.368 LIB libspdk_bdev_lvol.a 00:03:01.368 SYMLINK libspdk_bdev_malloc.so 00:03:01.368 SYMLINK libspdk_bdev_zone_block.so 00:03:01.368 SO libspdk_bdev_lvol.so.6.0 00:03:01.368 SYMLINK libspdk_bdev_delay.so 00:03:01.368 LIB libspdk_bdev_virtio.a 00:03:01.368 SYMLINK libspdk_bdev_iscsi.so 00:03:01.368 SO libspdk_bdev_virtio.so.6.0 00:03:01.368 SYMLINK libspdk_bdev_lvol.so 00:03:01.368 SYMLINK libspdk_bdev_virtio.so 00:03:01.626 LIB libspdk_bdev_raid.a 00:03:01.626 SO libspdk_bdev_raid.so.6.0 00:03:01.626 SYMLINK libspdk_bdev_raid.so 00:03:02.561 LIB libspdk_bdev_nvme.a 00:03:02.561 SO libspdk_bdev_nvme.so.7.0 00:03:02.561 SYMLINK libspdk_bdev_nvme.so 00:03:03.498 CC module/event/subsystems/scheduler/scheduler.o 00:03:03.499 CC module/event/subsystems/sock/sock.o 00:03:03.499 CC module/event/subsystems/iobuf/iobuf.o 00:03:03.499 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:03.499 CC module/event/subsystems/vmd/vmd.o 00:03:03.499 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:03.499 CC module/event/subsystems/keyring/keyring.o 00:03:03.499 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:03.499 LIB libspdk_event_scheduler.a 00:03:03.499 LIB libspdk_event_sock.a 00:03:03.499 LIB libspdk_event_iobuf.a 00:03:03.499 LIB libspdk_event_keyring.a 00:03:03.499 SO libspdk_event_scheduler.so.4.0 00:03:03.499 LIB libspdk_event_vhost_blk.a 00:03:03.499 LIB libspdk_event_vmd.a 00:03:03.499 SO libspdk_event_sock.so.5.0 00:03:03.499 SO libspdk_event_iobuf.so.3.0 00:03:03.499 SO libspdk_event_vhost_blk.so.3.0 00:03:03.499 SO libspdk_event_keyring.so.1.0 00:03:03.499 SO libspdk_event_vmd.so.6.0 00:03:03.499 SYMLINK libspdk_event_scheduler.so 00:03:03.499 SYMLINK libspdk_event_sock.so 00:03:03.499 SYMLINK libspdk_event_iobuf.so 00:03:03.499 SYMLINK libspdk_event_vhost_blk.so 00:03:03.499 SYMLINK libspdk_event_keyring.so 00:03:03.499 SYMLINK libspdk_event_vmd.so 00:03:04.067 CC module/event/subsystems/accel/accel.o 00:03:04.067 LIB libspdk_event_accel.a 00:03:04.067 SO libspdk_event_accel.so.6.0 00:03:04.067 SYMLINK libspdk_event_accel.so 00:03:04.636 CC module/event/subsystems/bdev/bdev.o 00:03:04.636 LIB libspdk_event_bdev.a 00:03:04.636 SO libspdk_event_bdev.so.6.0 00:03:04.894 SYMLINK libspdk_event_bdev.so 00:03:05.153 CC module/event/subsystems/nbd/nbd.o 00:03:05.153 CC module/event/subsystems/scsi/scsi.o 00:03:05.153 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:05.153 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:05.153 CC module/event/subsystems/ublk/ublk.o 00:03:05.153 LIB libspdk_event_nbd.a 00:03:05.153 LIB libspdk_event_scsi.a 00:03:05.412 SO libspdk_event_nbd.so.6.0 00:03:05.412 LIB libspdk_event_ublk.a 00:03:05.412 SO libspdk_event_scsi.so.6.0 00:03:05.412 LIB libspdk_event_nvmf.a 00:03:05.412 SO libspdk_event_ublk.so.3.0 00:03:05.412 SYMLINK libspdk_event_nbd.so 00:03:05.412 SO libspdk_event_nvmf.so.6.0 00:03:05.412 SYMLINK libspdk_event_scsi.so 
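The LIB / SO / SYMLINK triples running through this phase are the build announcing, for each component, a static archive, a versioned shared object, and an unversioned symlink for the linker. A minimal sketch of what one triple corresponds to, using a hypothetical component name libexample and the 6.0 version suffix seen above; the exact flags SPDK's Makefiles pass are not visible in this log:

    # "LIB libexample.a": archive the objects into a static library.
    ar crs libexample.a example.o

    # "SO libexample.so.6.0": link the versioned shared object.
    cc -shared -Wl,-soname,libexample.so.6 -o libexample.so.6.0 example.o

    # "SYMLINK libexample.so": the unversioned name later link steps resolve.
    ln -sf libexample.so.6.0 libexample.so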
00:03:05.412 SYMLINK libspdk_event_ublk.so 00:03:05.412 SYMLINK libspdk_event_nvmf.so 00:03:05.670 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:05.670 CC module/event/subsystems/iscsi/iscsi.o 00:03:05.929 LIB libspdk_event_vhost_scsi.a 00:03:05.929 SO libspdk_event_vhost_scsi.so.3.0 00:03:05.929 LIB libspdk_event_iscsi.a 00:03:05.929 SYMLINK libspdk_event_vhost_scsi.so 00:03:05.929 SO libspdk_event_iscsi.so.6.0 00:03:05.929 SYMLINK libspdk_event_iscsi.so 00:03:06.188 SO libspdk.so.6.0 00:03:06.188 SYMLINK libspdk.so 00:03:06.446 CC test/rpc_client/rpc_client_test.o 00:03:06.446 CC app/trace_record/trace_record.o 00:03:06.446 CC app/spdk_nvme_identify/identify.o 00:03:06.712 CC app/spdk_lspci/spdk_lspci.o 00:03:06.712 CXX app/trace/trace.o 00:03:06.712 TEST_HEADER include/spdk/accel.h 00:03:06.712 TEST_HEADER include/spdk/accel_module.h 00:03:06.712 CC app/spdk_top/spdk_top.o 00:03:06.712 TEST_HEADER include/spdk/base64.h 00:03:06.712 TEST_HEADER include/spdk/assert.h 00:03:06.712 TEST_HEADER include/spdk/barrier.h 00:03:06.712 TEST_HEADER include/spdk/bdev.h 00:03:06.712 TEST_HEADER include/spdk/bdev_module.h 00:03:06.712 TEST_HEADER include/spdk/bdev_zone.h 00:03:06.712 TEST_HEADER include/spdk/bit_array.h 00:03:06.712 TEST_HEADER include/spdk/bit_pool.h 00:03:06.712 CC app/spdk_nvme_discover/discovery_aer.o 00:03:06.712 TEST_HEADER include/spdk/blob_bdev.h 00:03:06.712 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:06.712 TEST_HEADER include/spdk/blobfs.h 00:03:06.712 TEST_HEADER include/spdk/conf.h 00:03:06.712 TEST_HEADER include/spdk/blob.h 00:03:06.712 CC app/spdk_nvme_perf/perf.o 00:03:06.712 TEST_HEADER include/spdk/config.h 00:03:06.712 TEST_HEADER include/spdk/cpuset.h 00:03:06.712 TEST_HEADER include/spdk/crc16.h 00:03:06.712 TEST_HEADER include/spdk/crc32.h 00:03:06.712 TEST_HEADER include/spdk/crc64.h 00:03:06.712 TEST_HEADER include/spdk/dma.h 00:03:06.712 TEST_HEADER include/spdk/dif.h 00:03:06.712 TEST_HEADER include/spdk/endian.h 00:03:06.712 TEST_HEADER include/spdk/env_dpdk.h 00:03:06.712 TEST_HEADER include/spdk/event.h 00:03:06.712 TEST_HEADER include/spdk/env.h 00:03:06.712 TEST_HEADER include/spdk/fd_group.h 00:03:06.712 TEST_HEADER include/spdk/fd.h 00:03:06.712 TEST_HEADER include/spdk/file.h 00:03:06.712 TEST_HEADER include/spdk/ftl.h 00:03:06.712 TEST_HEADER include/spdk/gpt_spec.h 00:03:06.712 TEST_HEADER include/spdk/hexlify.h 00:03:06.712 TEST_HEADER include/spdk/histogram_data.h 00:03:06.712 TEST_HEADER include/spdk/idxd.h 00:03:06.712 TEST_HEADER include/spdk/idxd_spec.h 00:03:06.712 TEST_HEADER include/spdk/init.h 00:03:06.712 TEST_HEADER include/spdk/ioat.h 00:03:06.712 TEST_HEADER include/spdk/ioat_spec.h 00:03:06.712 TEST_HEADER include/spdk/iscsi_spec.h 00:03:06.712 TEST_HEADER include/spdk/json.h 00:03:06.712 TEST_HEADER include/spdk/jsonrpc.h 00:03:06.712 TEST_HEADER include/spdk/keyring.h 00:03:06.712 CC app/spdk_dd/spdk_dd.o 00:03:06.712 TEST_HEADER include/spdk/likely.h 00:03:06.712 TEST_HEADER include/spdk/keyring_module.h 00:03:06.712 TEST_HEADER include/spdk/lvol.h 00:03:06.712 TEST_HEADER include/spdk/log.h 00:03:06.712 TEST_HEADER include/spdk/memory.h 00:03:06.712 TEST_HEADER include/spdk/mmio.h 00:03:06.712 TEST_HEADER include/spdk/nbd.h 00:03:06.712 TEST_HEADER include/spdk/notify.h 00:03:06.712 TEST_HEADER include/spdk/nvme.h 00:03:06.712 TEST_HEADER include/spdk/nvme_intel.h 00:03:06.712 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:06.712 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:06.712 TEST_HEADER include/spdk/nvme_spec.h 
00:03:06.712 TEST_HEADER include/spdk/nvme_zns.h 00:03:06.712 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:06.712 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:06.712 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:06.712 TEST_HEADER include/spdk/nvmf.h 00:03:06.712 TEST_HEADER include/spdk/nvmf_spec.h 00:03:06.712 CC app/nvmf_tgt/nvmf_main.o 00:03:06.712 TEST_HEADER include/spdk/nvmf_transport.h 00:03:06.712 TEST_HEADER include/spdk/opal.h 00:03:06.712 TEST_HEADER include/spdk/opal_spec.h 00:03:06.712 TEST_HEADER include/spdk/pci_ids.h 00:03:06.712 CC app/vhost/vhost.o 00:03:06.712 TEST_HEADER include/spdk/pipe.h 00:03:06.712 TEST_HEADER include/spdk/queue.h 00:03:06.712 TEST_HEADER include/spdk/reduce.h 00:03:06.712 CC app/iscsi_tgt/iscsi_tgt.o 00:03:06.712 TEST_HEADER include/spdk/rpc.h 00:03:06.712 TEST_HEADER include/spdk/scheduler.h 00:03:06.713 TEST_HEADER include/spdk/scsi.h 00:03:06.713 TEST_HEADER include/spdk/scsi_spec.h 00:03:06.713 CC app/spdk_tgt/spdk_tgt.o 00:03:06.713 TEST_HEADER include/spdk/sock.h 00:03:06.713 TEST_HEADER include/spdk/stdinc.h 00:03:06.713 TEST_HEADER include/spdk/string.h 00:03:06.713 TEST_HEADER include/spdk/thread.h 00:03:06.713 TEST_HEADER include/spdk/trace.h 00:03:06.713 TEST_HEADER include/spdk/trace_parser.h 00:03:06.713 TEST_HEADER include/spdk/tree.h 00:03:06.713 TEST_HEADER include/spdk/util.h 00:03:06.713 TEST_HEADER include/spdk/ublk.h 00:03:06.713 TEST_HEADER include/spdk/uuid.h 00:03:06.713 TEST_HEADER include/spdk/version.h 00:03:06.713 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:06.713 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:06.713 TEST_HEADER include/spdk/vhost.h 00:03:06.713 TEST_HEADER include/spdk/vmd.h 00:03:06.713 TEST_HEADER include/spdk/xor.h 00:03:06.713 TEST_HEADER include/spdk/zipf.h 00:03:06.713 CXX test/cpp_headers/accel.o 00:03:06.713 CXX test/cpp_headers/accel_module.o 00:03:06.713 CXX test/cpp_headers/assert.o 00:03:06.713 CXX test/cpp_headers/barrier.o 00:03:06.713 CXX test/cpp_headers/bdev.o 00:03:06.713 CXX test/cpp_headers/base64.o 00:03:06.713 CXX test/cpp_headers/bdev_module.o 00:03:06.713 CXX test/cpp_headers/bdev_zone.o 00:03:06.713 CXX test/cpp_headers/bit_pool.o 00:03:06.713 CXX test/cpp_headers/bit_array.o 00:03:06.713 CXX test/cpp_headers/blob_bdev.o 00:03:06.713 CXX test/cpp_headers/blobfs.o 00:03:06.713 CXX test/cpp_headers/blobfs_bdev.o 00:03:06.713 CXX test/cpp_headers/blob.o 00:03:06.713 CXX test/cpp_headers/conf.o 00:03:06.713 CXX test/cpp_headers/config.o 00:03:06.713 CXX test/cpp_headers/cpuset.o 00:03:06.713 CXX test/cpp_headers/crc16.o 00:03:06.713 CXX test/cpp_headers/crc32.o 00:03:06.713 CXX test/cpp_headers/crc64.o 00:03:06.713 CXX test/cpp_headers/dif.o 00:03:06.713 CXX test/cpp_headers/dma.o 00:03:06.713 CXX test/cpp_headers/endian.o 00:03:06.713 CXX test/cpp_headers/env_dpdk.o 00:03:06.713 CXX test/cpp_headers/env.o 00:03:06.713 CXX test/cpp_headers/event.o 00:03:06.713 CXX test/cpp_headers/fd_group.o 00:03:06.713 CXX test/cpp_headers/file.o 00:03:06.713 CXX test/cpp_headers/fd.o 00:03:06.713 CXX test/cpp_headers/ftl.o 00:03:06.713 CXX test/cpp_headers/gpt_spec.o 00:03:06.713 CXX test/cpp_headers/hexlify.o 00:03:06.713 CXX test/cpp_headers/histogram_data.o 00:03:06.713 CXX test/cpp_headers/idxd.o 00:03:06.713 CXX test/cpp_headers/idxd_spec.o 00:03:06.713 CXX test/cpp_headers/init.o 00:03:06.713 CXX test/cpp_headers/ioat.o 00:03:06.713 CC test/app/jsoncat/jsoncat.o 00:03:06.713 CC test/app/histogram_perf/histogram_perf.o 00:03:06.713 CC test/app/stub/stub.o 00:03:06.713 CC 
test/nvme/reset/reset.o 00:03:06.713 CC test/event/event_perf/event_perf.o 00:03:06.713 CC test/event/reactor_perf/reactor_perf.o 00:03:06.713 CC test/nvme/e2edp/nvme_dp.o 00:03:06.713 CXX test/cpp_headers/ioat_spec.o 00:03:06.713 CC test/nvme/overhead/overhead.o 00:03:06.713 CC test/event/reactor/reactor.o 00:03:06.713 CC test/nvme/sgl/sgl.o 00:03:06.983 CC test/nvme/boot_partition/boot_partition.o 00:03:06.983 CC test/nvme/aer/aer.o 00:03:06.983 CC test/env/vtophys/vtophys.o 00:03:06.983 CC test/nvme/compliance/nvme_compliance.o 00:03:06.983 CC examples/sock/hello_world/hello_sock.o 00:03:06.983 CC test/nvme/simple_copy/simple_copy.o 00:03:06.983 CC test/thread/poller_perf/poller_perf.o 00:03:06.983 CC test/env/pci/pci_ut.o 00:03:06.983 CC test/nvme/startup/startup.o 00:03:06.983 CC test/nvme/connect_stress/connect_stress.o 00:03:06.983 CC test/nvme/err_injection/err_injection.o 00:03:06.983 CC examples/ioat/verify/verify.o 00:03:06.983 CC test/env/memory/memory_ut.o 00:03:06.983 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.983 CC test/nvme/reserve/reserve.o 00:03:06.983 CC examples/vmd/led/led.o 00:03:06.983 CC examples/idxd/perf/perf.o 00:03:06.983 CC examples/ioat/perf/perf.o 00:03:06.983 CC examples/util/zipf/zipf.o 00:03:06.983 CC examples/accel/perf/accel_perf.o 00:03:06.983 CC test/event/app_repeat/app_repeat.o 00:03:06.983 CC test/accel/dif/dif.o 00:03:06.983 CC test/nvme/fdp/fdp.o 00:03:06.983 CC test/bdev/bdevio/bdevio.o 00:03:06.983 CC test/app/bdev_svc/bdev_svc.o 00:03:06.983 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:06.983 CC test/dma/test_dma/test_dma.o 00:03:06.983 CC test/blobfs/mkfs/mkfs.o 00:03:06.983 CC test/nvme/cuse/cuse.o 00:03:06.983 CC examples/nvme/reconnect/reconnect.o 00:03:06.983 CC examples/vmd/lsvmd/lsvmd.o 00:03:06.983 CC app/fio/nvme/fio_plugin.o 00:03:06.983 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.983 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:06.983 CC test/event/scheduler/scheduler.o 00:03:06.983 CC examples/nvme/hello_world/hello_world.o 00:03:06.983 CC examples/nvme/abort/abort.o 00:03:06.983 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.983 CC examples/nvme/arbitration/arbitration.o 00:03:06.983 CC examples/nvme/hotplug/hotplug.o 00:03:06.983 CC examples/blob/hello_world/hello_blob.o 00:03:06.983 CC examples/thread/thread/thread_ex.o 00:03:06.983 CC examples/bdev/hello_world/hello_bdev.o 00:03:06.983 CC examples/blob/cli/blobcli.o 00:03:06.983 CC examples/bdev/bdevperf/bdevperf.o 00:03:06.983 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.983 CC examples/nvmf/nvmf/nvmf.o 00:03:06.983 CC app/fio/bdev/fio_plugin.o 00:03:06.983 LINK spdk_lspci 00:03:07.250 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:07.250 CC test/lvol/esnap/esnap.o 00:03:07.250 CC test/env/mem_callbacks/mem_callbacks.o 00:03:07.250 LINK vhost 00:03:07.250 LINK rpc_client_test 00:03:07.250 LINK nvmf_tgt 00:03:07.250 LINK interrupt_tgt 00:03:07.250 LINK spdk_nvme_discover 00:03:07.250 LINK spdk_tgt 00:03:07.250 LINK reactor_perf 00:03:07.250 LINK jsoncat 00:03:07.250 LINK histogram_perf 00:03:07.250 LINK spdk_trace_record 00:03:07.250 LINK iscsi_tgt 00:03:07.518 LINK event_perf 00:03:07.518 LINK poller_perf 00:03:07.518 LINK app_repeat 00:03:07.518 LINK reactor 00:03:07.518 LINK vtophys 00:03:07.518 LINK startup 00:03:07.518 LINK zipf 00:03:07.518 LINK lsvmd 00:03:07.518 LINK stub 00:03:07.518 LINK led 00:03:07.518 LINK boot_partition 00:03:07.518 LINK env_dpdk_post_init 00:03:07.518 CXX test/cpp_headers/iscsi_spec.o 00:03:07.518 
CXX test/cpp_headers/json.o 00:03:07.518 CXX test/cpp_headers/jsonrpc.o 00:03:07.518 LINK connect_stress 00:03:07.518 CXX test/cpp_headers/keyring.o 00:03:07.518 CXX test/cpp_headers/keyring_module.o 00:03:07.518 CXX test/cpp_headers/likely.o 00:03:07.518 CXX test/cpp_headers/log.o 00:03:07.518 CXX test/cpp_headers/lvol.o 00:03:07.518 CXX test/cpp_headers/memory.o 00:03:07.518 LINK bdev_svc 00:03:07.518 CXX test/cpp_headers/mmio.o 00:03:07.518 CXX test/cpp_headers/nbd.o 00:03:07.518 CXX test/cpp_headers/notify.o 00:03:07.518 CXX test/cpp_headers/nvme.o 00:03:07.518 CXX test/cpp_headers/nvme_intel.o 00:03:07.518 LINK err_injection 00:03:07.518 CXX test/cpp_headers/nvme_ocssd.o 00:03:07.518 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:07.518 CXX test/cpp_headers/nvme_spec.o 00:03:07.518 CXX test/cpp_headers/nvme_zns.o 00:03:07.518 CXX test/cpp_headers/nvmf_cmd.o 00:03:07.518 LINK cmb_copy 00:03:07.518 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:07.518 CXX test/cpp_headers/nvmf.o 00:03:07.518 LINK doorbell_aers 00:03:07.518 CXX test/cpp_headers/nvmf_spec.o 00:03:07.518 LINK mkfs 00:03:07.518 CXX test/cpp_headers/nvmf_transport.o 00:03:07.518 LINK reserve 00:03:07.518 LINK fused_ordering 00:03:07.518 CXX test/cpp_headers/opal.o 00:03:07.518 CXX test/cpp_headers/opal_spec.o 00:03:07.518 CXX test/cpp_headers/pci_ids.o 00:03:07.518 LINK verify 00:03:07.518 CXX test/cpp_headers/pipe.o 00:03:07.518 CXX test/cpp_headers/queue.o 00:03:07.518 CXX test/cpp_headers/reduce.o 00:03:07.518 CXX test/cpp_headers/rpc.o 00:03:07.518 CXX test/cpp_headers/scheduler.o 00:03:07.518 LINK pmr_persistence 00:03:07.518 CXX test/cpp_headers/scsi.o 00:03:07.518 CXX test/cpp_headers/scsi_spec.o 00:03:07.518 CXX test/cpp_headers/sock.o 00:03:07.518 LINK ioat_perf 00:03:07.518 CXX test/cpp_headers/stdinc.o 00:03:07.518 LINK simple_copy 00:03:07.518 CXX test/cpp_headers/string.o 00:03:07.518 LINK hello_sock 00:03:07.518 LINK hotplug 00:03:07.518 LINK reset 00:03:07.518 LINK sgl 00:03:07.518 CXX test/cpp_headers/thread.o 00:03:07.518 CXX test/cpp_headers/trace.o 00:03:07.518 LINK hello_world 00:03:07.518 LINK nvme_dp 00:03:07.518 LINK scheduler 00:03:07.518 LINK overhead 00:03:07.518 LINK aer 00:03:07.518 LINK hello_bdev 00:03:07.518 LINK spdk_dd 00:03:07.783 LINK hello_blob 00:03:07.783 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:07.783 CXX test/cpp_headers/trace_parser.o 00:03:07.783 CXX test/cpp_headers/ublk.o 00:03:07.783 CXX test/cpp_headers/tree.o 00:03:07.783 LINK nvmf 00:03:07.783 LINK idxd_perf 00:03:07.783 LINK nvme_compliance 00:03:07.783 CXX test/cpp_headers/util.o 00:03:07.783 LINK thread 00:03:07.783 LINK fdp 00:03:07.783 CXX test/cpp_headers/uuid.o 00:03:07.783 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:07.783 CXX test/cpp_headers/version.o 00:03:07.783 CXX test/cpp_headers/vfio_user_pci.o 00:03:07.783 CXX test/cpp_headers/vfio_user_spec.o 00:03:07.783 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:07.783 CXX test/cpp_headers/vhost.o 00:03:07.783 CXX test/cpp_headers/vmd.o 00:03:07.783 LINK spdk_trace 00:03:07.783 CXX test/cpp_headers/xor.o 00:03:07.783 LINK arbitration 00:03:07.783 CXX test/cpp_headers/zipf.o 00:03:07.783 LINK reconnect 00:03:07.783 LINK bdevio 00:03:07.783 LINK test_dma 00:03:07.783 LINK pci_ut 00:03:07.783 LINK dif 00:03:07.783 LINK abort 00:03:08.041 LINK accel_perf 00:03:08.041 LINK blobcli 00:03:08.041 LINK nvme_fuzz 00:03:08.041 LINK spdk_nvme 00:03:08.041 LINK nvme_manage 00:03:08.041 LINK spdk_bdev 00:03:08.041 LINK mem_callbacks 00:03:08.301 LINK spdk_nvme_perf 00:03:08.301 
LINK spdk_top 00:03:08.301 LINK spdk_nvme_identify 00:03:08.301 LINK bdevperf 00:03:08.301 LINK vhost_fuzz 00:03:08.560 LINK memory_ut 00:03:08.560 LINK cuse 00:03:09.129 LINK iscsi_fuzz 00:03:11.112 LINK esnap 00:03:11.371 00:03:11.371 real 0m33.813s 00:03:11.371 user 5m7.004s 00:03:11.371 sys 3m2.788s 00:03:11.371 20:50:02 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:11.371 20:50:02 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.371 ************************************ 00:03:11.371 END TEST make 00:03:11.371 ************************************ 00:03:11.371 20:50:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.371 20:50:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.371 20:50:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.371 20:50:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.371 20:50:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.371 20:50:02 -- pm/common@44 -- $ pid=3227122 00:03:11.371 20:50:02 -- pm/common@50 -- $ kill -TERM 3227122 00:03:11.371 20:50:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.371 20:50:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.371 20:50:02 -- pm/common@44 -- $ pid=3227124 00:03:11.371 20:50:02 -- pm/common@50 -- $ kill -TERM 3227124 00:03:11.371 20:50:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.371 20:50:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:11.371 20:50:02 -- pm/common@44 -- $ pid=3227126 00:03:11.371 20:50:02 -- pm/common@50 -- $ kill -TERM 3227126 00:03:11.371 20:50:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.371 20:50:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:11.371 20:50:02 -- pm/common@44 -- $ pid=3227144 00:03:11.371 20:50:02 -- pm/common@50 -- $ sudo -E kill -TERM 3227144 00:03:11.630 20:50:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:11.630 20:50:02 -- nvmf/common.sh@7 -- # uname -s 00:03:11.630 20:50:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.630 20:50:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.631 20:50:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.631 20:50:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.631 20:50:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.631 20:50:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.631 20:50:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.631 20:50:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.631 20:50:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.631 20:50:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.631 20:50:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:11.631 20:50:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:11.631 20:50:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.631 20:50:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.631 20:50:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:11.631 20:50:02 -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.631 20:50:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:11.631 20:50:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.631 20:50:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.631 20:50:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.631 20:50:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.631 20:50:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.631 20:50:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.631 20:50:02 -- paths/export.sh@5 -- # export PATH 00:03:11.631 20:50:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.631 20:50:02 -- nvmf/common.sh@47 -- # : 0 00:03:11.631 20:50:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:11.631 20:50:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:11.631 20:50:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:11.631 20:50:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.631 20:50:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.631 20:50:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:11.631 20:50:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:11.631 20:50:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:11.631 20:50:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.631 20:50:02 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.631 20:50:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.631 20:50:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.631 20:50:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:11.631 20:50:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.631 20:50:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:11.631 20:50:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.631 20:50:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.631 20:50:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.631 20:50:02 -- spdk/autotest.sh@48 -- # udevadm_pid=3300973 00:03:11.631 20:50:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.631 20:50:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.631 20:50:02 
-- pm/common@17 -- # local monitor 00:03:11.631 20:50:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.631 20:50:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.631 20:50:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.631 20:50:02 -- pm/common@21 -- # date +%s 00:03:11.631 20:50:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.631 20:50:02 -- pm/common@21 -- # date +%s 00:03:11.631 20:50:02 -- pm/common@21 -- # date +%s 00:03:11.631 20:50:02 -- pm/common@25 -- # sleep 1 00:03:11.631 20:50:02 -- pm/common@21 -- # date +%s 00:03:11.631 20:50:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720896602 00:03:11.631 20:50:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720896602 00:03:11.631 20:50:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720896602 00:03:11.631 20:50:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720896602 00:03:11.631 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720896602_collect-vmstat.pm.log 00:03:11.631 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720896602_collect-cpu-load.pm.log 00:03:11.631 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720896602_collect-cpu-temp.pm.log 00:03:11.631 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720896602_collect-bmc-pm.bmc.pm.log 00:03:12.567 20:50:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.567 20:50:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.567 20:50:03 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:12.567 20:50:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.567 20:50:03 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.567 20:50:03 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:12.567 20:50:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.567 20:50:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:12.567 20:50:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:12.567 20:50:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:12.567 20:50:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:12.567 20:50:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:12.567 20:50:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.567 20:50:03 -- common/autotest_common.sh@1451 -- # uname 00:03:12.567 20:50:03 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:12.567 20:50:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.567 20:50:03 -- common/autotest_common.sh@1471 -- # uname 
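Before any tests run, autotest starts the resource monitors whose invocations appear above: each collector under scripts/perf/pm/ is handed the shared power output directory via -d and an epoch-stamped log prefix via -p, and the "Redirecting to ..." lines are those collectors opening their per-run .pm.log files. A condensed sketch of that launch pattern; the -d, -l, and -p arguments are copied verbatim from the invocations above, and the loop itself is an illustration rather than the literal pm/common code:

    #!/usr/bin/env bash
    out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
    stamp=$(date +%s)   # 1720896602 in the log names above

    # Launch the CPU-load, vmstat, and CPU-temp collectors in the background,
    # all sharing the same output directory and log prefix.
    for collector in collect-cpu-load collect-vmstat collect-cpu-temp; do
        ./scripts/perf/pm/$collector -d "$out" -l -p "monitor.autotest.sh.$stamp" &
    done
    # collect-bmc-pm additionally runs under sudo -E in the log above.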
00:03:12.567 20:50:03 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:12.567 20:50:03 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:12.567 20:50:03 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:12.567 20:50:03 -- spdk/autotest.sh@72 -- # hash lcov 00:03:12.567 20:50:03 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:12.567 20:50:03 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:12.567 --rc lcov_branch_coverage=1 00:03:12.567 --rc lcov_function_coverage=1 00:03:12.567 --rc genhtml_branch_coverage=1 00:03:12.567 --rc genhtml_function_coverage=1 00:03:12.567 --rc genhtml_legend=1 00:03:12.567 --rc geninfo_all_blocks=1 00:03:12.567 ' 00:03:12.567 20:50:03 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:12.567 --rc lcov_branch_coverage=1 00:03:12.567 --rc lcov_function_coverage=1 00:03:12.567 --rc genhtml_branch_coverage=1 00:03:12.567 --rc genhtml_function_coverage=1 00:03:12.567 --rc genhtml_legend=1 00:03:12.567 --rc geninfo_all_blocks=1 00:03:12.567 ' 00:03:12.567 20:50:03 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:12.567 --rc lcov_branch_coverage=1 00:03:12.567 --rc lcov_function_coverage=1 00:03:12.567 --rc genhtml_branch_coverage=1 00:03:12.567 --rc genhtml_function_coverage=1 00:03:12.567 --rc genhtml_legend=1 00:03:12.567 --rc geninfo_all_blocks=1 00:03:12.567 --no-external' 00:03:12.567 20:50:03 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:12.567 --rc lcov_branch_coverage=1 00:03:12.567 --rc lcov_function_coverage=1 00:03:12.567 --rc genhtml_branch_coverage=1 00:03:12.567 --rc genhtml_function_coverage=1 00:03:12.567 --rc genhtml_legend=1 00:03:12.567 --rc geninfo_all_blocks=1 00:03:12.567 --no-external' 00:03:12.567 20:50:03 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:12.824 lcov: LCOV version 1.14 00:03:12.824 20:50:03 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:22.845 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:22.845 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:30.966 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:30.966 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:30.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no 
functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:30.967 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:30.967 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:31.227 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:31.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:31.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:31.487 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:31.487 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:34.782 20:50:25 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:34.782 20:50:25 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:34.782 20:50:25 -- common/autotest_common.sh@10 -- # set +x 00:03:34.782 20:50:25 -- spdk/autotest.sh@91 
-- # rm -f 00:03:34.782 20:50:25 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.074 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:38.074 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:38.074 20:50:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:38.074 20:50:28 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:38.074 20:50:28 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:38.074 20:50:28 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:38.074 20:50:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:38.074 20:50:28 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:38.074 20:50:28 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:38.074 20:50:28 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.074 20:50:28 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:38.074 20:50:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:38.074 20:50:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.074 20:50:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:38.074 20:50:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:38.074 20:50:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:38.074 20:50:28 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:38.074 No valid GPT data, bailing 00:03:38.075 20:50:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:38.075 20:50:28 -- scripts/common.sh@391 -- # pt= 00:03:38.075 20:50:28 -- scripts/common.sh@392 -- # return 1 00:03:38.075 20:50:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:38.075 1+0 records in 00:03:38.075 1+0 records out 00:03:38.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00393321 s, 267 MB/s 00:03:38.075 20:50:28 -- spdk/autotest.sh@118 -- # sync 00:03:38.334 20:50:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:38.334 20:50:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:38.334 20:50:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:46.457 20:50:36 -- spdk/autotest.sh@124 -- # uname -s 00:03:46.457 20:50:36 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:46.457 20:50:36 -- 
spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:46.457 20:50:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:46.457 20:50:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:46.457 20:50:36 -- common/autotest_common.sh@10 -- # set +x 00:03:46.457 ************************************ 00:03:46.457 START TEST setup.sh 00:03:46.457 ************************************ 00:03:46.457 20:50:36 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:46.457 * Looking for test storage... 00:03:46.457 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:46.457 20:50:36 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:46.457 20:50:36 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:46.457 20:50:36 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:46.457 20:50:36 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:46.457 20:50:36 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:46.457 20:50:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.457 ************************************ 00:03:46.457 START TEST acl 00:03:46.457 ************************************ 00:03:46.457 20:50:36 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:46.457 * Looking for test storage... 00:03:46.457 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:46.457 20:50:36 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:46.457 20:50:36 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:46.457 20:50:36 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:46.457 20:50:36 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:46.457 20:50:36 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:46.457 20:50:36 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:46.457 20:50:36 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:46.457 20:50:36 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:46.457 20:50:36 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:46.457 20:50:36 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:46.457 20:50:36 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:46.457 20:50:36 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:46.457 20:50:36 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:46.457 20:50:36 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:46.457 20:50:36 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.457 20:50:36 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.750 20:50:39 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:49.750 20:50:39 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:49.750 20:50:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:49.750 20:50:39 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:49.750 20:50:39 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.750 20:50:39 setup.sh.acl -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:53.040 Hugepages 00:03:53.040 node hugesize free / total 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 00:03:53.040 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:00:04.7 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:53.040 20:50:43 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:53.040 20:50:43 
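The acl suite driving this trace checks setup.sh's PCI filtering in both directions. A minimal sketch of the "denied" side that follows, assuming it is run from the spdk checkout in this workspace (the check_denied wrapper name is invented for illustration; PCI_BLOCKED, the BDF 0000:d8:00.0, and the skip message are taken verbatim from the trace):

# Sketch: block one controller, then confirm setup.sh reports skipping it.
check_denied() {                       # hypothetical helper, not the script's name
    local bdf=0000:d8:00.0             # the NVMe controller under test
    PCI_BLOCKED=" $bdf" ./scripts/setup.sh config \
        | grep "Skipping denied controller at $bdf"
}

The "allowed" test further down inverts this: it sets PCI_ALLOWED=0000:d8:00.0 and greps the config output for the "nvme -> vfio-pci" rebind instead.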
setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:53.040 20:50:43 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:53.040 20:50:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:53.040 ************************************ 00:03:53.040 START TEST denied 00:03:53.040 ************************************ 00:03:53.040 20:50:43 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:53.040 20:50:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:53.040 20:50:43 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:53.040 20:50:43 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:53.040 20:50:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.040 20:50:43 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:57.234 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:57.234 20:50:47 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:57.234 20:50:47 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:57.234 20:50:47 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:57.234 20:50:47 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:57.234 20:50:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:57.235 20:50:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:57.235 20:50:47 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:57.235 20:50:47 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:57.235 20:50:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.235 20:50:47 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.428 00:04:01.428 real 0m8.380s 00:04:01.428 user 0m2.733s 00:04:01.428 sys 0m5.030s 00:04:01.428 20:50:51 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:01.428 20:50:51 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:01.428 ************************************ 00:04:01.428 END TEST denied 00:04:01.428 ************************************ 00:04:01.428 20:50:52 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:01.428 20:50:52 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:01.428 20:50:52 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:01.428 20:50:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:01.428 ************************************ 00:04:01.428 START TEST allowed 00:04:01.428 ************************************ 00:04:01.428 20:50:52 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:01.428 20:50:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:04:01.428 20:50:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:01.428 20:50:52 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:04:01.428 20:50:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.428 20:50:52 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:06.699 0000:d8:00.0 (8086 
0a54): nvme -> vfio-pci 00:04:06.699 20:50:57 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:06.699 20:50:57 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:06.699 20:50:57 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:06.699 20:50:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.699 20:50:57 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.987 00:04:09.987 real 0m8.413s 00:04:09.987 user 0m2.088s 00:04:09.987 sys 0m4.396s 00:04:09.987 20:51:00 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:09.987 20:51:00 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:09.987 ************************************ 00:04:09.987 END TEST allowed 00:04:09.987 ************************************ 00:04:09.987 00:04:09.987 real 0m24.330s 00:04:09.987 user 0m7.457s 00:04:09.987 sys 0m14.599s 00:04:09.987 20:51:00 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:09.987 20:51:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:09.987 ************************************ 00:04:09.987 END TEST acl 00:04:09.987 ************************************ 00:04:09.987 20:51:00 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:09.987 20:51:00 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:09.987 20:51:00 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.987 20:51:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:09.987 ************************************ 00:04:09.987 START TEST hugepages 00:04:09.987 ************************************ 00:04:09.987 20:51:00 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:09.987 * Looking for test storage... 
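The hugepages helpers traced below all funnel through get_meminfo, which snapshots /proc/meminfo and scans it with IFS=': ' until the requested key matches; the long runs of "[[ ... ]] / continue" lines in the trace are that scan, one line per meminfo key. A condensed stand-alone equivalent, reading the file directly instead of a mapfile'd snapshot:

# Simplified get_meminfo: print the value column for one /proc/meminfo key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_meminfo Hugepagesize    # -> 2048 on this host (value is in kB)

The real helper also accepts an optional node argument and reads the per-node meminfo under /sys/devices/system/node when one is given, which is the "[[ -e /sys/devices/system/node/node/meminfo ]]" test visible in the trace.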
00:04:09.987 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 41162456 kB' 'MemAvailable: 44611336 kB' 'Buffers: 4096 kB' 'Cached: 11039340 kB' 'SwapCached: 0 kB' 'Active: 8080220 kB' 'Inactive: 3436880 kB' 'Active(anon): 7694620 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477408 kB' 'Mapped: 173008 kB' 'Shmem: 7220956 kB' 'KReclaimable: 247076 kB' 'Slab: 799768 kB' 'SReclaimable: 247076 kB' 'SUnreclaim: 552692 kB' 'KernelStack: 22208 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439044 kB' 'Committed_AS: 9025928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216716 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.987 20:51:00 setup.sh.hugepages 
-- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.987 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 
-- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.988 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.988 20:51:00 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:09.989 20:51:00 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:09.989 20:51:00 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:09.989 20:51:00 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:09.989 20:51:00 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.989 20:51:00 
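clear_hp, traced just above, zeroes every per-node hugepage pool (both the 2048kB and 1048576kB sizes on each of the two NUMA nodes, hence the four "echo 0" lines) before each test sizes its own pool. The walk it performs amounts to the following sketch (sysfs paths as in the trace; the writes need root):

# Reset all hugepage pools on every NUMA node, as clear_hp does above.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes    # matches the flag exported at the end of the trace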
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.989 ************************************ 00:04:09.989 START TEST default_setup 00:04:09.989 ************************************ 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.989 20:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:13.381 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.381 
0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:15.310 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43233860 kB' 'MemAvailable: 46682608 kB' 'Buffers: 4096 kB' 'Cached: 11039468 kB' 'SwapCached: 0 kB' 'Active: 8094128 kB' 'Inactive: 3436880 kB' 'Active(anon): 7708528 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490804 kB' 'Mapped: 172260 kB' 'Shmem: 7221084 kB' 'KReclaimable: 246812 kB' 'Slab: 798132 kB' 'SReclaimable: 246812 kB' 'SUnreclaim: 551320 kB' 'KernelStack: 22096 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9036224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216860 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB' 00:04:15.310 20:51:05 
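Before the meminfo snapshot above, default_setup converted its budget into a page count via get_test_nr_hugepages 2097152 0: the requested 2097152, divided by the hugepage size of 2048 reported earlier (both in kB, so a 2 GiB pool of 2 MiB pages), yields the nr_hugepages=1024 seen in the trace, all assigned to node 0. The setup.sh run then rebound the ioat and NVMe controllers to vfio-pci, and verify_nr_hugepages is now re-reading meminfo to confirm the pool. The sizing math, as a one-liner:

# The arithmetic behind nr_hugepages=1024 in the default_setup trace.
echo $(( 2097152 / 2048 ))    # requested kB / Hugepagesize kB -> 1024 pages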
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.310 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.311 20:51:05 
00:04:15.311 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': '; read -r var val _; continue -- repeated for Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted; none match AnonHugePages]
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
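The xtrace above is setup/common.sh's get_meminfo helper resolving AnonHugePages: it loads /proc/meminfo (or a per-node meminfo file under /sys when a node index is given), strips any "Node <n>" prefix, then scans key by key until the requested field matches and echoes its value. Below is a minimal sketch of that parsing pattern, reconstructed from the trace alone; the function body, name, and comments are assumptions, not the verbatim SPDK source.

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) prefix-strip pattern below

# Sketch of a get_meminfo-style lookup, pieced together from the
# setup/common.sh@17-33 xtrace in this log; treat it as illustrative.
get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local var val _
	local mem_f=/proc/meminfo mem line
	# With a node index, per-node counters come from sysfs instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node meminfo prefixes every line with "Node <n> "; drop it.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"  # value only, e.g. "0" for AnonHugePages here
		return 0
	done
	return 1
}

get_meminfo_sketch AnonHugePages  # prints 0 on the machine in this log

In this run the helper is called without a node argument, which is why the trace tests the literal path /sys/devices/system/node/node/meminfo (empty $node), finds it absent, and falls back to /proc/meminfo.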
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43232500 kB' 'MemAvailable: 46681248 kB' 'Buffers: 4096 kB' 'Cached: 11039472 kB' 'SwapCached: 0 kB' 'Active: 8093616 kB' 'Inactive: 3436880 kB' 'Active(anon): 7708016 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490392 kB' 'Mapped: 172244 kB' 'Shmem: 7221088 kB' 'KReclaimable: 246812 kB' 'Slab: 798132 kB' 'SReclaimable: 246812 kB' 'SUnreclaim: 551320 kB' 'KernelStack: 22224 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9037876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216860 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:15.312 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: loop skips every key from MemTotal through HugePages_Rsvd; none match HugePages_Surp]
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
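Each printf '%s\n' 'MemTotal: ...' block in the trace is the raw meminfo snapshot the helper iterates over, and its hugepage fields are internally consistent: 1024 pages at a Hugepagesize of 2048 kB account for exactly the reported 'Hugetlb: 2097152 kB'. A one-line check using the values copied from the snapshot above:

# Values from the snapshot above; Hugetlb = HugePages_Total * Hugepagesize.
hp_total=1024 hp_size_kb=2048
echo $(( hp_total * hp_size_kb ))  # 2097152 -> matches 'Hugetlb: 2097152 kB'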
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.314 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43230588 kB' 'MemAvailable: 46679336 kB' 'Buffers: 4096 kB' 'Cached: 11039488 kB' 'SwapCached: 0 kB' 'Active: 8093404 kB' 'Inactive: 3436880 kB' 'Active(anon): 7707804 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490096 kB' 'Mapped: 172244 kB' 'Shmem: 7221104 kB' 'KReclaimable: 246812 kB' 'Slab: 798124 kB' 'SReclaimable: 246812 kB' 'SUnreclaim: 551312 kB' 'KernelStack: 22160 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9037904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216860 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:15.315 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: loop skips every key from MemTotal through HugePages_Free; none match HugePages_Rsvd]
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:15.316 nr_hugepages=1024
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:15.316 resv_hugepages=0
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:15.316 surplus_hugepages=0
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:15.316 anon_hugepages=0
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
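With anon, surp, and resv collected, the hugepages.sh@107/@109 guards above assert that all 1024 configured pages are plain default-size pages, with no surplus or reserved pages in the count. A compact restatement of those two checks, with variable names taken from the trace; the literal 1024 is the already-expanded target page count, and the error handling here is an assumption:

nr_hugepages=1024 surp=0 resv=0  # values echoed by the trace above
# hugepages.sh@107: configured pages fully account for surplus + reserved
(( 1024 == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved hugepages" >&2
# hugepages.sh@109: the default setup really allocated 1024 pages
(( 1024 == nr_hugepages )) || echo "nr_hugepages mismatch" >&2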
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43231780 kB' 'MemAvailable: 46680528 kB' 'Buffers: 4096 kB' 'Cached: 11039524 kB' 'SwapCached: 0 kB' 'Active: 8092896 kB' 'Inactive: 3436880 kB' 'Active(anon): 7707296 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489552 kB' 'Mapped: 172304 kB' 'Shmem: 7221140 kB' 'KReclaimable: 246812 kB' 'Slab: 798252 kB' 'SReclaimable: 246812 kB' 'SUnreclaim: 551440 kB' 'KernelStack: 22096 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9036484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216940 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:15.316 20:51:05 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: loop skipping MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp; no match for HugePages_Total yet]
setup/common.sh@31 -- # read -r var val _ 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.317 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.318 20:51:06 
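The scan just traced is setup/common.sh's get_meminfo: it reads the chosen meminfo file into an array, splits each line on ': ', and echoes the value of the first field whose name matches the requested key (HugePages_Total here, yielding 1024). A minimal Bash sketch of that logic, reconstructed from the trace records above rather than copied from the SPDK source, so treat the details as an approximation:

shopt -s extglob   # needed for the +([0-9]) pattern below

# Sketch: print one meminfo value, optionally scoped to a NUMA node.
get_meminfo() {
	local get=$1 node=$2 var val _ line
	local mem_f=/proc/meminfo mem
	# Per-node counters live under /sys/devices/system/node/nodeN/meminfo.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	# Node files prefix every line with "Node N "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		# The xtrace records above are exactly this comparison firing
		# once per field until the requested key is reached.
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}

get_meminfo HugePages_Total    # prints 1024 on this host
get_meminfo HugePages_Surp 0   # surplus pages on node 0 only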
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27300228 kB' 'MemUsed: 5338912 kB' 'SwapCached: 0 kB' 'Active: 1791796 kB' 'Inactive: 72192 kB' 'Active(anon): 1620068 kB' 'Inactive(anon): 0 kB' 'Active(file): 171728 kB' 'Inactive(file): 72192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1636276 kB' 'Mapped: 79092 kB' 'AnonPages: 230972 kB' 'Shmem: 1392356 kB' 'KernelStack: 11528 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111168 kB' 'Slab: 335992 kB' 'SReclaimable: 111168 kB' 'SUnreclaim: 224824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:15.318 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same setup/common.sh@32 test / @32 continue / @31 read cycle repeats for the remaining non-matching node0 fields, MemFree through HugePages_Free ...]
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:15.320 node0=1024 expecting 1024
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:15.320 
00:04:15.320 real 0m5.250s
00:04:15.320 user 0m1.170s
00:04:15.320 sys 0m2.251s
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:15.320 20:51:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:15.320 ************************************
00:04:15.320 END TEST default_setup
00:04:15.320 ************************************
00:04:15.320 20:51:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:15.320 20:51:06 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:15.320 20:51:06 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:15.320 20:51:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:15.320 ************************************
00:04:15.320 START TEST per_node_1G_alloc
00:04:15.320 ************************************
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:15.320 20:51:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:18.627 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:18.627 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
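To summarize the arithmetic the trace just performed: get_test_nr_hugepages took the 1048576 kB (1 GiB) request and, at the default 2048 kB hugepage size, assigned 512 pages to each of the two requested nodes, so setup.sh is re-run with NRHUGE=512 and HUGENODE=0,1 and reports the already-bound vfio-pci devices unchanged. A hand-run equivalent of that step, as a hedged sketch (the sysfs writes illustrate what the allocation amounts to per node; they are not lifted from setup.sh itself):

# 1 GiB per node at the default 2 MiB hugepage size: 1048576 / 2048 = 512 pages.
sudo NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh

# Per-node equivalent via sysfs, useful as a cross-check:
for node in 0 1; do
	echo 512 | sudo tee /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
done
grep HugePages_Total /proc/meminfo   # expect 1024 pages in total afterwards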
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43244804 kB' 'MemAvailable: 46693552 kB' 'Buffers: 4096 kB' 'Cached: 11039612 kB' 'SwapCached: 0 kB' 'Active: 8093728 kB' 'Inactive: 3436880 kB' 'Active(anon): 7708128 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490000 kB' 'Mapped: 172372 kB' 'Shmem: 7221228 kB' 'KReclaimable: 246812 kB' 'Slab: 797928 kB' 'SReclaimable: 246812 kB' 'SUnreclaim: 551116 kB' 'KernelStack: 22368 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9036852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217084 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.627 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same setup/common.sh@32 test / @32 continue / @31 read cycle repeats for the remaining non-matching fields, MemFree through HardwareCorrupted ...]
00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
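One detail worth noting before the next scan: the record at setup/hugepages.sh@96 is verify_nr_hugepages checking /sys/kernel/mm/transparent_hugepage/enabled (its content here is 'always [madvise] never', and *\[\n\e\v\e\r\]* is the xtrace-escaped glob *[never]*). Only when THP is not disabled does the script fetch AnonHugePages, which is 0 kB on this host. A rough sketch of that guard, reusing the get_meminfo sketch from earlier and reconstructed from the trace rather than quoted from the script:

# Fold THP-backed anonymous pages into the hugepage accounting only
# when the active THP mode (the bracketed entry) is not [never].
anon=0
thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_mode != *"[never]"* ]]; then
	anon=$(get_meminfo AnonHugePages)   # 0 kB in the trace above
fi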
[[ -n '' ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43248904 kB' 'MemAvailable: 46697652 kB' 'Buffers: 4096 kB' 'Cached: 11039632 kB' 'SwapCached: 0 kB' 'Active: 8093692 kB' 'Inactive: 3436880 kB' 'Active(anon): 7708092 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490660 kB' 'Mapped: 172332 kB' 'Shmem: 7221248 kB' 'KReclaimable: 246812 kB' 'Slab: 798064 kB' 'SReclaimable: 246812 kB' 'SUnreclaim: 551252 kB' 'KernelStack: 22160 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9035616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216956 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.629 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.630 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.631 20:51:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- 
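The trace above is setup/common.sh's get_meminfo helper doing a linear scan of a /proc/meminfo snapshot: it mapfiles the file into an array (stripping the "Node <N> " prefix that per-node meminfo files carry), splits each entry on ': ', and echoes the value of the first field whose name equals the requested key. The following is a minimal self-contained sketch of that logic, reconstructed from the trace rather than copied from the SPDK source; the function body and comments are illustrative.

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup (illustrative, not the verbatim
# setup/common.sh implementation). Usage: get_meminfo KEY [NODE]
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem

    # Prefer the per-node meminfo when a NUMA node is requested and present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it so field
    # names look the same as in /proc/meminfo. extglob enables +([0-9]).
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        # Split "HugePages_Surp: 0" / "MemFree: 43248904 kB" on ': ' and ' '.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val" # value in kB for sized fields, a bare page count otherwise
        return 0
    done
    return 1
}

# Example: prints 0 on the box traced above (HugePages_Surp: 0).
get_meminfo HugePages_Surp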
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43250316 kB' 'MemAvailable: 46699064 kB' 'Buffers: 4096 kB' 'Cached: 11039652 kB' 'SwapCached: 0 kB' 'Active: 8093408 kB' 'Inactive: 3436880 kB' 'Active(anon): 7707808 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489800 kB' 'Mapped: 172324 kB' 'Shmem: 7221268 kB' 'KReclaimable: 246812 kB' 'Slab: 797968 kB' 'SReclaimable: 246812 kB' 'SUnreclaim: 551156 kB' 'KernelStack: 22176 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9037264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216972 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:18.631 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical @31/@32 compare-and-continue trace repeats for every following /proc/meminfo field down to HugePages_Free ...]
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:18.633 nr_hugepages=1024
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:18.633 resv_hugepages=0
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:18.633 surplus_hugepages=0
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:18.633 anon_hugepages=0
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
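With anon, surp and resv collected, hugepages.sh@102-109 prints the counters and asserts the accounting: the requested 1024 pages must be fully explained by the static pool plus surplus and reserved pages (1024 == 1024 + 0 + 0 here), and the pool size itself must equal the request. A compact sketch of that verification step, reusing the hypothetical get_meminfo helper sketched above; the literal 1024 stands in for the test's requested page count and all variable names are illustrative.

#!/usr/bin/env bash
# Sketch of the hugepages.sh verification step (illustrative; assumes the
# get_meminfo helper from the earlier sketch is in scope).
expected=1024                                # pages requested by the test

anon=$(get_meminfo AnonHugePages)            # transparent hugepages, 0 in this run
surp=$(get_meminfo HugePages_Surp)           # surplus pages beyond the static pool
resv=$(get_meminfo HugePages_Rsvd)           # reserved but not yet faulted in
nr_hugepages=$(get_meminfo HugePages_Total)  # size of the static pool

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# Every requested page must be accounted for by the pool plus surplus and
# reserved pages, and the pool itself must match the request exactly.
(( expected == nr_hugepages + surp + resv )) || exit 1
(( expected == nr_hugepages )) || exit 1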
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43250576 kB' 'MemAvailable: 46699324 kB' 'Buffers: 4096 kB' 'Cached: 11039668 kB' 'SwapCached: 0 kB' 'Active: 8093872 kB' 'Inactive: 3436880 kB' 'Active(anon): 7708272 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490244 kB' 'Mapped: 172320 kB' 'Shmem: 7221284 kB' 'KReclaimable: 246812 kB' 'Slab: 797968 kB' 'SReclaimable: 246812 kB' 'SUnreclaim: 551156 kB' 'KernelStack: 22128 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9037284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216988 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:18.633 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical @31/@32 compare-and-continue trace repeats for the following fields; the capture cuts off mid-scan ...]
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.635 20:51:09 
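The trace above is get_meminfo scanning a node's meminfo one key at a time. A minimal sketch of that lookup pattern, assuming only what the trace shows (the function name get_meminfo_sketch is made up here; this is illustrative, not the verbatim setup/common.sh helper):

#!/usr/bin/env bash
# Illustrative sketch of the lookup pattern traced above. It scans a meminfo
# file with the same IFS=': ' read loop, 'continue'-ing past every key except
# the one asked for -- those skips are the long runs of '# continue' entries
# in this log.
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node N "; strip that so the
    # key lands in $var exactly as it does for /proc/meminfo.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"        # numeric value; the unit (kB), if any, lands in $_
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_sketch HugePages_Total     # system-wide, e.g. 1024
get_meminfo_sketch HugePages_Surp 0    # node0 surplus, e.g. 0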
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28337176 kB' 'MemUsed: 4301964 kB' 'SwapCached: 0 kB' 'Active: 1791200 kB' 'Inactive: 72192 kB' 'Active(anon): 1619472 kB' 'Inactive(anon): 0 kB' 'Active(file): 171728 kB' 'Inactive(file): 72192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1636396 kB' 'Mapped: 79096 kB' 'AnonPages: 230148 kB' 'Shmem: 1392476 kB' 'KernelStack: 11480 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111168 kB' 'Slab: 336084 kB' 'SReclaimable: 111168 kB' 'SUnreclaim: 224916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:18.635 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the setup/common.sh@31-32 read loop continues past every node0 key above until it reaches HugePages_Surp]
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.637 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656044 kB' 'MemFree: 14915296 kB' 'MemUsed: 12740748 kB' 'SwapCached: 0 kB' 'Active: 6302024 kB' 'Inactive: 3364688 kB' 'Active(anon): 6088152 kB' 'Inactive(anon): 0 kB' 'Active(file): 213872 kB' 'Inactive(file): 3364688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9407396 kB' 'Mapped: 93232 kB' 'AnonPages: 259436 kB' 'Shmem: 5828836 kB' 'KernelStack: 10568 kB' 'PageTables: 3492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135644 kB' 'Slab: 461920 kB' 'SReclaimable: 135644 kB' 'SUnreclaim: 326276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the same read loop continues past every node1 key above until it reaches HugePages_Surp]
00:04:18.638 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.638 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.638 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:18.638 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.638 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.638 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.638 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.638 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:18.638 node0=512 expecting 512
00:04:18.639 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.639 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.639 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.639 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:18.639 node1=512 expecting 512
00:04:18.639 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:18.639
00:04:18.639 real	0m3.371s
00:04:18.639 user	0m1.292s
00:04:18.639 sys	0m2.147s
00:04:18.639 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:18.639 20:51:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:18.639 ************************************
00:04:18.639 END TEST per_node_1G_alloc
00:04:18.639 ************************************
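The per-node verdicts just printed, and the even_2G_alloc test about to start, come down to one division. A back-of-the-envelope sketch in bash (illustrative; the variable names are assumptions, only the numbers -- 2 NUMA nodes, 2048 kB default hugepages, a 2097152 kB request -- come from this trace):

#!/usr/bin/env bash
# 2 GiB requested / 2048 kB per hugepage = 1024 pages; with even allocation
# across 2 nodes, each node is expected to hold 512 pages.
size_kb=2097152            # get_test_nr_hugepages argument, in kB
hugepagesize_kb=2048       # 'Hugepagesize:' from /proc/meminfo
no_nodes=2                 # node0 and node1 on this rig

nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024 pages total
per_node=$(( nr_hugepages / no_nodes ))         # 512 pages per node

for (( node = 0; node < no_nodes; node++ )); do
    # Mirrors the 'nodeN=512 expecting 512' verdicts printed above.
    echo "node$node=$per_node expecting $per_node"
done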
setup/hugepages.sh@49 -- # local size=2097152 00:04:18.898 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:18.898 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.898 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:18.898 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:18.898 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.898 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.898 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.899 20:51:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:22.196 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 
00:04:22.196 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.196 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.196 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43290408 kB' 'MemAvailable: 46739104 kB' 'Buffers: 4096 kB' 'Cached: 11039780 kB' 'SwapCached: 0 kB' 'Active: 8092520 kB' 'Inactive: 3436880 kB' 'Active(anon): 7706920 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488732 kB' 'Mapped: 171284 kB' 'Shmem: 7221396 kB' 'KReclaimable: 246708 kB' 'Slab: 797936 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551228 kB' 'KernelStack: 22016 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9027580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216908 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
[trace elided: setup/common.sh@31-32 read every /proc/meminfo key in turn and hit 'continue' on each one, MemTotal through HardwareCorrupted, until AnonHugePages matched]
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.197 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43290636 kB' 'MemAvailable: 46739332 kB' 'Buffers: 4096 kB' 'Cached: 11039784 kB' 'SwapCached: 0 kB' 'Active: 8092408 kB' 'Inactive: 3436880 kB' 'Active(anon): 7706808 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488696 kB' 'Mapped: 171264 kB' 'Shmem: 7221400 kB' 'KReclaimable: 246708 kB' 'Slab: 797936 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551228 kB' 'KernelStack: 22080 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9027228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216876 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
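Both snapshots agree on the hugepage counters, and those counters are internally consistent with the even 2G allocation requested above. As a quick worked check of the arithmetic, shown as a shell one-liner for convenience:

# HugePages_Total (1024) x Hugepagesize (2048 kB) = Hugetlb (2097152 kB = 2 GiB),
# i.e. NRHUGE=1024 split 512 + 512 across the two NUMA nodes.
echo "$((1024 * 2048)) kB"   # -> 2097152 kB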
[trace elided: the same setup/common.sh@31-32 per-key scan repeats over this snapshot, hitting 'continue' on every key from MemTotal through HugePages_Rsvd until HugePages_Surp matched]
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.198 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43292160 kB' 'MemAvailable: 46740856 kB' 'Buffers: 4096 kB' 'Cached: 11039800 kB' 'SwapCached: 0 kB' 'Active: 8092296 kB' 'Inactive: 3436880 kB' 'Active(anon): 7706696 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488572 kB' 'Mapped: 171264 kB' 'Shmem: 7221416 kB' 'KReclaimable: 246708 kB' 'Slab: 797936 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551228 kB' 'KernelStack: 22032 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9027248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216860 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
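With anon=0 and surp=0 recorded, verify_nr_hugepages issues its third get_meminfo call, for HugePages_Rsvd. The locals declared at hugepages.sh@89-94 suggest an accounting of roughly the shape below, using the get_meminfo sketch given earlier; this is a sketch only, and the per-node comparison it alludes to happens past the end of this excerpt.

# Hedged sketch of the flow implied by hugepages.sh@89-100 and @154.
verify_nr_hugepages() {
    local node
    local sorted_t sorted_s          # @90/@91: declared in the trace, used later
    local surp resv anon

    # @96 first confirms transparent_hugepage is not pinned to [never].
    anon=$(get_meminfo AnonHugePages)    # @97: 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # @99: 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # @100: the scan elided below

    # ...expected-vs-actual totals, globally and per NUMA node, would be
    # compared here against the 512/512 split computed earlier; that part
    # of the log lies outside this excerpt.
}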
[trace elided: setup/common.sh@31-32 begins the same per-key scan for HugePages_Rsvd, hitting 'continue' on every key from MemTotal through Unaccepted]
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:22.199 nr_hugepages=1024 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.199 resv_hugepages=0 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.199 surplus_hugepages=0 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.199 anon_hugepages=0 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43292400 kB' 'MemAvailable: 46741096 kB' 'Buffers: 4096 kB' 'Cached: 11039824 kB' 'SwapCached: 0 kB' 'Active: 8091332 kB' 'Inactive: 3436880 kB' 'Active(anon): 7705732 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
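The assertions at setup/hugepages.sh@107 and @109 encode the invariant this test is checking: with no hugepage consumers running, nothing should be reserved or surplus, so the free-page count (the 1024 on the left of @107) must equal nr_hugepages + surp + resv and must also equal nr_hugepages itself. A minimal sketch of that bookkeeping, assuming the get_meminfo helper traced in this log (the variable plumbing here is illustrative, not the script's exact source):

    # Verify an idle 2M hugepage pool: every configured page free,
    # none reserved, none surplus.
    verify_idle_pool() {
        local nr_hugepages=$1
        local free resv surp total
        free=$(get_meminfo HugePages_Free)
        resv=$(get_meminfo HugePages_Rsvd)
        surp=$(get_meminfo HugePages_Surp)
        total=$(get_meminfo HugePages_Total)
        (( free == nr_hugepages + surp + resv )) || return 1
        (( free == nr_hugepages )) || return 1
        (( total == nr_hugepages + surp + resv )) || return 1
    }

On this box all four reads line up: 1024 free, 0 reserved, 0 surplus, 1024 total.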
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.199 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43292400 kB' 'MemAvailable: 46741096 kB' 'Buffers: 4096 kB' 'Cached: 11039824 kB' 'SwapCached: 0 kB' 'Active: 8091332 kB' 'Inactive: 3436880 kB' 'Active(anon): 7705732 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487568 kB' 'Mapped: 171264 kB' 'Shmem: 7221440 kB' 'KReclaimable: 246708 kB' 'Slab: 797936 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551228 kB' 'KernelStack: 21984 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9027404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216860 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
[xtrace elided: every key from MemTotal through Unaccepted compared against HugePages_Total at setup/common.sh@32 and skipped via 'continue']
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
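Every get_meminfo call in this trace expands to the same pattern: pick /proc/meminfo (or a per-node sysfs file when a node argument is given), strip the "Node <n> " prefix that per-node files carry, then feed the lines back through a read loop that skips every key until the requested one matches. A minimal sketch of the helper, reconstructed from the commands traced at setup/common.sh@16-33 (the real common.sh may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer that node's meminfo when it exists
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the 'continue' lines above
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total     # -> 1024 on this box
    get_meminfo HugePages_Surp 0    # -> 0, read from node0's meminfo

The long runs of 'continue' in the xtrace output are just this loop skipping, one line per non-matching meminfo key.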
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
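get_nodes, traced above, discovers the NUMA layout by globbing the sysfs node directories and records the per-node page target in nodes_sys; with the 1024-page pool split evenly across this box's two nodes, each expansion traces to 512. A minimal sketch under those assumptions (in the real script the 512 is computed from the requested split rather than hardcoded, and how no_nodes is derived is inferred from the trace):

    shopt -s extglob
    declare -a nodes_sys

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # index is the numeric node id, e.g. "0" from ".../node0"
            nodes_sys[${node##*node}]=512
        done
        no_nodes=${#nodes_sys[@]}   # traces to no_nodes=2 on this machine
        (( no_nodes > 0 ))          # fail if no NUMA nodes were found
    }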
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.200 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28361980 kB' 'MemUsed: 4277160 kB' 'SwapCached: 0 kB' 'Active: 1790052 kB' 'Inactive: 72192 kB' 'Active(anon): 1618324 kB' 'Inactive(anon): 0 kB' 'Active(file): 171728 kB' 'Inactive(file): 72192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1636520 kB' 'Mapped: 78184 kB' 'AnonPages: 228888 kB' 'Shmem: 1392600 kB' 'KernelStack: 11464 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111072 kB' 'Slab: 335728 kB' 'SReclaimable: 111072 kB' 'SUnreclaim: 224656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: every node0 meminfo key from MemTotal through HugePages_Free compared against HugePages_Surp at setup/common.sh@32 and skipped via 'continue']
00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
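The block above is one pass of the per-node verification loop at setup/hugepages.sh@115-117: the globally reserved pages (resv, 0 here) and the node's surplus pages (read back via get_meminfo HugePages_Surp, also 0) are folded into nodes_test before the observed counts are compared against the expected split. A sketch of the two loops involved, with the variable plumbing assumed from the trace rather than taken from the script source:

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # global reserved pages
        surp=$(get_meminfo HugePages_Surp "$node")   # this node's surplus
        (( nodes_test[node] += surp ))
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # bucket observed per-node counts
        sorted_s[nodes_sys[node]]=1    # bucket expected per-node counts
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done

With both nodes reporting 512 observed against 512 expected, the [[ 512 == 512 ]] check that follows passes and even_2G_alloc ends successfully.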
-- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.201 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: the IFS=': ' / read -r var val _ loop scans each remaining /proc/meminfo key (KernelStack through HugePages_Free), compares it against HugePages_Surp, and skips it with continue]
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:22.202 node0=512 expecting 512
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:22.202 node1=512 expecting 512
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:22.202
00:04:22.202 real    0m3.257s
00:04:22.202 user    0m1.138s
00:04:22.202 sys     0m2.117s
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:22.202 20:51:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:22.202 ************************************
00:04:22.202 END TEST even_2G_alloc
00:04:22.202 ************************************
00:04:22.202 20:51:12 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:22.202 20:51:12 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:22.202 20:51:12 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:22.202 20:51:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:22.202 ************************************
00:04:22.202 START TEST odd_alloc
00:04:22.202 ************************************
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
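[editor's aside] In case nr_hugepages=1025 looks like a typo: get_test_nr_hugepages was handed size=2098176 kB, which at this box's 2048 kB hugepage size works out to a deliberately odd 1025 pages (HUGEMEM=2049 MB is chosen so the count cannot split evenly across two nodes). A minimal sketch of that arithmetic, assuming a 2048 kB default page size; the rounding direction is inferred from the observed numbers, not read from hugepages.sh:

# Sketch: reproduce nr_hugepages=1025 from HUGEMEM=2049 (assumed arithmetic).
HUGEMEM=2049                                    # MB, as exported by the test
default_hugepages=2048                          # kB per 2M hugepage
size=$((HUGEMEM * 1024))                        # 2098176 kB
nr_hugepages=$(((size + default_hugepages - 1) / default_hugepages))
echo "nr_hugepages=$nr_hugepages"               # -> nr_hugepages=1025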
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:22.202 20:51:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:25.489 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:25.489 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
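[editor's aside] The nodes_test assignments above show how the odd total is split across the two NUMA nodes: node1 gets 1025/2 = 512 and node0 absorbs the remaining 513, so 512 + 513 = 1025. A plausible reconstruction of that loop, with names taken from the xtrace; the hugepages.sh source itself is not shown in this log, so treat this as a sketch that merely reproduces the traced values:

#!/usr/bin/env bash
# Sketch: distribute an odd hugepage count across NUMA nodes (traced values).
_nr_hugepages=1025
_no_nodes=2
declare -a nodes_test
while ((_no_nodes > 0)); do
    # Give the highest-numbered remaining node an even share of what is left.
    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))   # 512, then 513
    : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))          # xtrace shows ": 513", ": 0"
    : $((--_no_nodes))                                         # xtrace shows ": 1", ": 0"
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"           # node0=513 node1=512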
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43296504 kB' 'MemAvailable: 46745200 kB' 'Buffers: 4096 kB' 'Cached: 11039952 kB' 'SwapCached: 0 kB' 'Active: 8092148 kB' 'Inactive: 3436880 kB' 'Active(anon): 7706548 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487784 kB' 'Mapped: 171400 kB' 'Shmem: 7221568 kB' 'KReclaimable: 246708 kB' 'Slab: 797880 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551172 kB' 'KernelStack: 22000 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486596 kB' 'Committed_AS: 9028684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216764 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:25.757 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace elided: each snapshot key from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped with continue]
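[editor's aside] What the mapfile/IFS/read sequence above is doing: common.sh's get_meminfo snapshots the relevant meminfo file (a per-node file under /sys when a node argument is given, otherwise /proc/meminfo), strips any leading "Node N " prefix, then walks key/value pairs with IFS=': ' until the requested key matches and its value is echoed. A self-contained sketch of the same technique; the function name and call shape mirror the xtrace, but the body is a simplified reconstruction rather than the verbatim script:

#!/usr/bin/env bash
shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

# Sketch of the meminfo key scan traced above (simplified reconstruction).
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo mem
    # Per-node counters live under /sys when a node is requested.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # drop "Node N " prefixes, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total    # -> 1025 on this box
get_meminfo AnonHugePages 0    # node 0's counter, where /sys exposes it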
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43296640 kB' 'MemAvailable: 46745336 kB' 'Buffers: 4096 kB' 'Cached: 11039956 kB' 'SwapCached: 0 kB' 'Active: 8091400 kB' 'Inactive: 3436880 kB' 'Active(anon): 7705800 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487536 kB' 'Mapped: 171272 kB' 'Shmem: 7221572 kB' 'KReclaimable: 246708 kB' 'Slab: 797876 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551168 kB' 'KernelStack: 22000 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486596 kB' 'Committed_AS: 9028700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216764 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:25.759 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace elided: each snapshot key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue]
00:04:25.760 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.760 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
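[editor's aside] When eyeballing a run like this by hand, the values the verifier collects here (anon, surp, and next resv) can each be pulled with a one-liner instead of the full scan. These are plain awk equivalents for interactive use, not anything the test itself runs:

# One-off equivalents of the get_meminfo lookups in this phase:
awk '$1 == "AnonHugePages:"  { print $2 }' /proc/meminfo   # -> 0
awk '$1 == "HugePages_Surp:" { print $2 }' /proc/meminfo   # -> 0
awk '$1 == "HugePages_Rsvd:" { print $2 }' /proc/meminfo   # -> 0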
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43296640 kB' 'MemAvailable: 46745336 kB' 'Buffers: 4096 kB' 'Cached: 11039956 kB' 'SwapCached: 0 kB' 'Active: 8091400 kB' 'Inactive: 3436880 kB' 'Active(anon): 7705800 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487536 kB' 'Mapped: 171272 kB' 'Shmem: 7221572 kB' 'KReclaimable: 246708 kB' 'Slab: 797876 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551168 kB' 'KernelStack: 22000 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486596 kB' 'Committed_AS: 9028720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216780 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace elided: snapshot keys MemTotal through Writeback compared against HugePages_Rsvd and skipped with continue]
00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages ==
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.761 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.762 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.763 
20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:25.763 nr_hugepages=1025 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.763 resv_hugepages=0 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.763 surplus_hugepages=0 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.763 anon_hugepages=0 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
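For readability, the get_meminfo helper traced above boils down to the following minimal sketch. This is a reconstruction from the trace, not the verbatim setup/common.sh source; in particular the plain for-loop here stands in for the streamed read loop the script actually traces.

    shopt -s extglob    # required for the +([0-9]) pattern below, as in common.sh
    get_meminfo() {
        # Sketch of setup/common.sh get_meminfo: print the value of one meminfo key.
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # With a node argument, prefer the per-node sysfs meminfo file when present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                  # one array element per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")           # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line" # split "Key: value kB" into key and value
            if [[ $var == "$get" ]]; then          # the traced [[ ... ]] / continue scan
                echo "$val"
                return 0
            fi
        done
        return 1
    }

Called as get_meminfo HugePages_Rsvd (global) or get_meminfo HugePages_Surp 0 (node 0), it prints only the value field, which is why each scan in this log ends in an echo followed by return 0.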
00:04:25.763 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43296640 kB' 'MemAvailable: 46745336 kB' 'Buffers: 4096 kB' 'Cached: 11040012 kB' 'SwapCached: 0 kB' 'Active: 8091092 kB' 'Inactive: 3436880 kB' 'Active(anon): 7705492 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487136 kB' 'Mapped: 171272 kB' 'Shmem: 7221628 kB' 'KReclaimable: 246708 kB' 'Slab: 797876 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551168 kB' 'KernelStack: 21984 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486596 kB' 'Committed_AS: 9028740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216780 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
[repetitive per-key trace elided: the same setup/common.sh@31-32 scan over the keys above, continuing until it reaches HugePages_Total]
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
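The hugepages.sh checks traced at @100-@110 amount to the invariant sketched below, using the get_meminfo sketch above. Only nr_hugepages, surp and resv are named in the trace; the total variable is assumed here for illustration (the script inlines the value 1025).

    # Global-pool invariant for the odd_alloc case, as checked in the trace:
    nr_hugepages=1025                          # the odd page count requested by the test
    surp=$(get_meminfo HugePages_Surp)         # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)         # 0 in this run
    total=$(get_meminfo HugePages_Total)       # 1025 in this run
    (( total == nr_hugepages + surp + resv ))  # every allocated page is accounted for
    (( total == nr_hugepages ))                # no surplus/reserved pages mask the odd one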
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.765 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28357140 kB' 'MemUsed: 4282000 kB' 'SwapCached: 0 kB' 'Active: 1790996 kB' 'Inactive: 72192 kB' 'Active(anon): 1619268 kB' 'Inactive(anon): 0 kB' 'Active(file): 171728 kB' 'Inactive(file): 72192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1636596 kB' 'Mapped: 78184 kB' 'AnonPages: 229804 kB' 'Shmem: 1392676 kB' 'KernelStack: 11464 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111072 kB' 'Slab: 335860 kB' 'SReclaimable: 111072 kB' 'SUnreclaim: 224788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[repetitive per-key trace elided: the node0 scan continues over the keys above until it reaches HugePages_Surp]
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
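The per-node pass that begins at hugepages.sh@115 runs the same helper against each node's sysfs meminfo. A sketch under the names visible in the trace follows; note the trace assigns 512 and 513 to nodes_sys in get_nodes, and the relation to the nodes_test array iterated here is assumed for illustration.

    # Expected odd split across the two NUMA nodes, per the get_nodes trace above.
    declare -a nodes_test=([0]=512 [1]=513)
    resv=0                                             # from the HugePages_Rsvd lookup earlier
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                 # hugepages.sh@116: reserved pages count too
        surp=$(get_meminfo HugePages_Surp "$node")     # reads node$node/meminfo; 0 on both nodes here
        (( nodes_test[node] += surp ))                 # hugepages.sh@117: add per-node surplus
    done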
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.766 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656044 kB' 'MemFree: 14939248 kB' 'MemUsed: 12716796 kB' 'SwapCached: 0 kB' 'Active: 6300492 kB' 'Inactive: 3364688 kB' 'Active(anon): 6086620 kB' 'Inactive(anon): 0 kB' 'Active(file): 213872 kB' 'Inactive(file): 3364688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9407536 kB' 'Mapped: 93088 kB' 'AnonPages: 257728 kB' 'Shmem: 5828976 kB' 'KernelStack: 10536 kB' 'PageTables: 3380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135636 kB' 'Slab: 462016 kB' 'SReclaimable: 135636 kB' 'SUnreclaim: 326380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[repetitive per-key trace elided: the node1 scan proceeds over the keys above; the log is truncated mid-scan at the NFS_Unstable key]
00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.767 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:25.768 node0=512 expecting 513 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.768 
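The get_meminfo calls traced above pick a single field out of either the global /proc/meminfo or, when a NUMA node is given, /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that gets stripped. A minimal standalone sketch of that pattern, modeled on the trace rather than copied from setup/common.sh:

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo mem
    # Per-node counters live in sysfs; fall back to the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp 1  # e.g. prints 0 for the node1 snapshot captured above

The scan is linear, which is why the xtrace shows one [[ ... ]] test plus one "continue" per meminfo field until the requested key matches.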
00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:25.768
00:04:25.768 real 0m3.693s
00:04:25.768 user 0m1.371s
00:04:25.768 sys 0m2.388s
00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:25.768 20:51:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:25.768 ************************************
00:04:25.768 END TEST odd_alloc
00:04:25.768 ************************************
00:04:25.768 20:51:16 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:25.768 20:51:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:25.768 20:51:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:25.768 20:51:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:26.028 ************************************
00:04:26.028 START TEST custom_alloc
00:04:26.028 ************************************
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
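The get_test_nr_hugepages trace above turns a requested pool size in kB into a page count using the system default huge page size (Hugepagesize: 2048 kB in the snapshots on this machine): 1048576 kB / 2048 kB = 512 pages, and the later call 2097152 kB / 2048 kB = 1024 pages. A sketch of that arithmetic, inferred from the traced values rather than taken verbatim from hugepages.sh:

#!/usr/bin/env bash
# Default huge page size in kB, e.g. 2048 on this test machine.
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

get_test_nr_hugepages() {
    local size=$1  # requested pool size in kB
    (( size >= default_hugepages )) || return 1  # need at least one page
    nr_hugepages=$((size / default_hugepages))
}

get_test_nr_hugepages 1048576 && echo "$nr_hugepages"  # 512 with 2 MiB pages
get_test_nr_hugepages 2097152 && echo "$nr_hugepages"  # 1024 with 2 MiB pages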
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
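The per-node loop traced above splits the page count across both NUMA nodes; the ": 256" / ": 1" / ": 0" entries are the arithmetic side effects of ":" no-ops showing up in the xtrace. A sketch of that descending split; the exact division rule is inferred from the traced values, not lifted from the source:

#!/usr/bin/env bash
declare -a nodes_test=()
_nr_hugepages=512  # first request: 512 pages over 2 nodes
_no_nodes=2

while ((_no_nodes > 0)); do
    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
    : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))  # traces as ": 256", then ": 0"
    : $((--_no_nodes))                                 # traces as ": 1", then ": 0"
done

echo "${nodes_test[0]} ${nodes_test[1]}"  # 256 256

The second get_test_nr_hugepages_per_node call takes the other branch seen above: because nodes_hp already holds an entry (nodes_hp[0]=512), pages are assigned along the keys of nodes_hp instead of being divided evenly.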
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:26.028 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:26.029 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:26.029 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:26.029 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:26.029 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:26.029 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:26.029 20:51:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:26.029 20:51:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.029 20:51:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:29.331 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:29.331 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:29.331 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:29.331 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:29.331 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:29.331 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:29.331 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:29.331 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:29.331 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:29.331 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:29.332 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:29.332 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:29.332 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:29.332 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:29.332 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:29.332 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:29.332 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
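The HUGENODE string handed to scripts/setup.sh above is assembled from the nodes_hp array, one "nodes_hp[<node>]=<pages>" item per node, comma-joined via the IFS=, set at the top of custom_alloc; 512 pages on node 0 plus 1024 on node 1 gives the nr_hugepages=1536 total that appears after setup.sh returns. A sketch of the assembly step, mirroring the traced loop (the consuming side inside setup.sh is not shown here):

#!/usr/bin/env bash
build_hugenode() {
    local IFS=,  # "${arr[*]}" joins array elements with the first IFS character
    local node
    local -a HUGENODE=()
    local -a nodes_hp=([0]=512 [1]=1024)
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    printf '%s\n' "${HUGENODE[*]}"
}

build_hugenode  # prints: nodes_hp[0]=512,nodes_hp[1]=1024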
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.332 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 42299320 kB' 'MemAvailable: 45748016 kB' 'Buffers: 4096 kB' 'Cached: 11040120 kB' 'SwapCached: 0 kB' 'Active: 8092912 kB' 'Inactive: 3436880 kB' 'Active(anon): 7707312 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488352 kB' 'Mapped: 171412 kB' 'Shmem: 7221736 kB' 'KReclaimable: 246708 kB' 'Slab: 797844 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551136 kB' 'KernelStack: 22016 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963332 kB' 'Committed_AS: 9029372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216780 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
[... xtrace trimmed: every /proc/meminfo field from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits "continue" ...]
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.333 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.334 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.334 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.334 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 42299884 kB' 'MemAvailable: 45748580 kB' 'Buffers: 4096 kB' 'Cached: 11040124 kB' 'SwapCached: 0 kB' 'Active: 8093060 kB' 'Inactive: 3436880 kB' 'Active(anon): 7707460 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488612 kB' 'Mapped: 171364 kB' 'Shmem: 7221740 kB' 'KReclaimable: 246708 kB' 'Slab: 797844 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551136 kB' 'KernelStack: 22032 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963332 kB' 'Committed_AS: 9030512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216748 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
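With the pool allocated, verify_nr_hugepages cross-checks /proc/meminfo: the snapshot above shows HugePages_Total: 1536 (the requested 512 + 1024), HugePages_Free: 1536, and zero Rsvd/Surp/AnonHugePages, i.e. no surplus pages and no transparent-huge-page interference before the per-node comparison that follows. A plausible distillation of those checks (the "mi" helper is a hypothetical stand-in for the get_meminfo sketch shown earlier, not the verbatim verify_nr_hugepages):

#!/usr/bin/env bash
# Tiny /proc/meminfo field getter; stands in for get_meminfo above.
mi() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

total=$(mi HugePages_Total)  # 1536 in the snapshot above
free=$(mi HugePages_Free)    # 1536
surp=$(mi HugePages_Surp)    # 0
anon=$(mi AnonHugePages)     # 0 (kB)

(( total == 1536 )) || echo "unexpected pool size: $total" >&2
(( free == total )) || echo "pages already in use: $free/$total" >&2
(( surp == 0 ))     || echo "surplus pages present: $surp" >&2
(( anon == 0 ))     || echo "THP interference: ${anon} kB anonymous huge pages" >&2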
[... xtrace trimmed: the HugePages_Surp scan repeats the same [[ field == HugePages_Surp ]] / "continue" pattern over every field of the snapshot above (MemTotal through HugePages_Free); the excerpt cuts off mid-scan at elapsed 00:04:29.335, wallclock 20:51:19 ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.335 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.336 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 42301300 kB' 'MemAvailable: 45749996 kB' 'Buffers: 4096 kB' 'Cached: 11040124 kB' 'SwapCached: 0 kB' 'Active: 8092688 kB' 'Inactive: 3436880 kB' 'Active(anon): 7707088 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488684 kB' 'Mapped: 171288 kB' 'Shmem: 7221740 kB' 'KReclaimable: 246708 kB' 'Slab: 797832 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551124 kB' 'KernelStack: 22000 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963332 kB' 'Committed_AS: 9030804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216748 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 
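The @16 printf feeds the mapfile'd copy of /proc/meminfo, one "Key: value" line at a time, into the @31 read loop, and the @31/@32 cycles that follow are get_meminfo scanning those lines until the requested field matches. A minimal sketch of that same key/value scan, assuming a hypothetical helper name (meminfo_value); the real implementation is the setup/common.sh get_meminfo being traced here:

  # Scan "Key: value" pairs from /proc/meminfo and print the value of the
  # one requested key, mirroring the IFS=': ' / read -r var val _ loop above.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"    # e.g. 0 for HugePages_Surp on this host
              return 0
          fi
      done < /proc/meminfo
      return 1               # key not present
  }

  meminfo_value HugePages_Surp

Each non-matching key costs exactly one [[ ... ]] test plus a continue, which is why the trace repeats that pair once per meminfo field.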
00:04:29.336 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.336 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical common.sh@31/@32 read/continue cycles for the remaining meminfo keys, MemFree through HugePages_Free ...]
00:04:29.337 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.337 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.337 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:29.338 nr_hugepages=1536
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:29.338 resv_hugepages=0
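HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in, and HugePages_Surp counts pages allocated beyond nr_hugepages under overcommit; the test wants both at 0 before it trusts the totals. The same counters can be cross-checked straight from sysfs. A hedged sketch using the standard kernel hugetlb layout for 2 MiB pages, which is not a path this particular trace reads:

  # Read the hugetlb counters for the 2 MiB page size from sysfs.
  hp=/sys/kernel/mm/hugepages/hugepages-2048kB
  printf 'total=%s free=%s resv=%s surplus=%s\n' \
      "$(<"$hp"/nr_hugepages)" "$(<"$hp"/free_hugepages)" \
      "$(<"$hp"/resv_hugepages)" "$(<"$hp"/surplus_hugepages)"

On this host that would report total=1536 free=1536 resv=0 surplus=0, matching the values the trace just derived from /proc/meminfo.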
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:29.338 surplus_hugepages=0
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:29.338 anon_hugepages=0
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 42304276 kB' 'MemAvailable: 45752972 kB' 'Buffers: 4096 kB' 'Cached: 11040164 kB' 'SwapCached: 0 kB' 'Active: 8092048 kB' 'Inactive: 3436880 kB' 'Active(anon): 7706448 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487944 kB' 'Mapped: 171288 kB' 'Shmem: 7221780 kB' 'KReclaimable: 246708 kB' 'Slab: 797832 kB' 'SReclaimable: 246708 kB' 'SUnreclaim: 551124 kB' 'KernelStack: 22000 kB' 'PageTables: 7320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963332 kB' 'Committed_AS: 9030660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216732 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
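With surp=0 and resv=0 established, hugepages.sh@107 checks that the expected 1536 pages equal nr_hugepages + surp + resv, and @110 repeats the check against a fresh HugePages_Total read from /proc/meminfo; the scan that follows ends in an echo 1536 that is that read's result. A sketch of the same arithmetic, with an awk one-liner standing in for the bash scan purely as an illustration:

  # Counters echoed by the test in this run
  nr_hugepages=1536 surp=0 resv=0
  # Illustrative awk equivalent of get_meminfo HugePages_Total
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
  # The @110-style assertion: what the kernel reports must account for
  # every requested page plus any surplus and reserved pages
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"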
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:29.338 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical common.sh@31/@32 read/continue cycles for the remaining meminfo keys, MemFree through Unaccepted ...]
00:04:29.339 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:29.339 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:29.339 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:29.339 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:29.339 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
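get_nodes walked /sys/devices/system/node/node+([0-9]) and recorded what each node currently exposes: 512 pages on node 0 and 1024 on node 1, i.e. the 1536 total split by the custom allocation policy under test. The per-node counts can be listed from the per-node hugetlb sysfs files. A sketch assuming the standard node layout, not code quoted from hugepages.sh:

  # Enumerate NUMA nodes the way get_nodes does and print each node's
  # 2 MiB hugepage count from its per-node sysfs directory.
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}
      echo "node$n: $(<"$node/hugepages/hugepages-2048kB/nr_hugepages") pages"
  done

On this host that would print node0: 512 pages and node1: 1024 pages.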
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28388700 kB' 'MemUsed: 4250440 kB' 'SwapCached: 0 kB' 'Active: 1792024 kB' 'Inactive: 72192 kB' 'Active(anon): 1620296 kB' 'Inactive(anon): 0 kB' 'Active(file): 171728 kB' 'Inactive(file): 72192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1636720 kB' 'Mapped: 78188 kB' 'AnonPages: 230680 kB' 'Shmem: 1392800 kB' 'KernelStack: 11480 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111072 kB' 'Slab: 335612 kB' 'SReclaimable: 111072 kB' 'SUnreclaim: 224540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.340 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... common.sh@31-32: every remaining node0 meminfo key from MemFree through HugePages_Free read and skipped via 'continue' ...]
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
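The per-node lookups ask the same question of /sys/devices/system/node/nodeN/meminfo, whose lines carry a 'Node N ' prefix that the trace shows being stripped (common.sh@29) before the scan. A sketch of that variant under the same assumptions (extglob is needed for the +([0-9]) pattern; the function name is again illustrative):

    # Per-node variant: strip the "Node N " prefix, then scan as before.
    shopt -s extglob
    get_node_meminfo_sketch() {
        local get=$1 node=$2 var val _ mem
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")   # same strip as common.sh@29
        printf '%s\n' "${mem[@]}" | while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; break; }
        done
    }
    # get_node_meminfo_sketch HugePages_Total 0   # -> 512 on this box

On this two-socket machine the 1536-page pool is deliberately split 512 on node0 and 1024 on node1, which is what the two HugePages_Surp probes are verifying.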
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656044 kB' 'MemFree: 13914012 kB' 'MemUsed: 13742032 kB' 'SwapCached: 0 kB' 'Active: 6300956 kB' 'Inactive: 3364688 kB' 'Active(anon): 6087084 kB' 'Inactive(anon): 0 kB' 'Active(file): 213872 kB' 'Inactive(file): 3364688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9407560 kB' 'Mapped: 93100 kB' 'AnonPages: 258168 kB' 'Shmem: 5829000 kB' 'KernelStack: 10776 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135636 kB' 'Slab: 462220 kB' 'SReclaimable: 135636 kB' 'SUnreclaim: 326584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.341 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... common.sh@31-32: every remaining node1 meminfo key from MemFree through HugePages_Free read and skipped via 'continue' ...]
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
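The closing comparison joins the per-node counts into '512,1024' and matches it against the expected split (the right-hand side is just that string with each character escaped by xtrace). A sketch of the same check, with the array values taken from this run:

    # Join observed per-node counts with commas and compare to expected.
    nodes_test=([0]=512 [1]=1024)           # counts measured above
    expected=512,1024
    got=$(IFS=,; echo "${nodes_test[*]}")   # "${arr[*]}" joins on IFS
    [[ $got == "$expected" ]] && echo "per-node split OK: $got"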
00:04:29.343
00:04:29.343 real 0m3.194s
00:04:29.343 user 0m1.186s
00:04:29.343 sys 0m2.021s
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:29.343 20:51:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:29.343 ************************************
00:04:29.343 END TEST custom_alloc
00:04:29.343 ************************************
00:04:29.343 20:51:19 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:29.343 20:51:19 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:29.343 20:51:19 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:29.343 20:51:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:29.343 ************************************
00:04:29.343 START TEST no_shrink_alloc
00:04:29.343 ************************************
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:29.343 20:51:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:32.682 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
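get_test_nr_hugepages converts the requested size into a page count before the allocation test runs. The trace only shows the input and the result (size=2097152 leading to nr_hugepages=1024), so the division below is an inference from those values plus the 'Hugepagesize: 2048 kB' reported later in this same log:

    # 2097152 kB requested / 2048 kB per hugepage = 1024 pages.
    size_kb=2097152
    hugepagesize_kb=2048
    echo $(( size_kb / hugepagesize_kb ))   # -> 1024

Because no_shrink_alloc passes the node list '0', all 1024 pages are assigned to node0 (nodes_test[0]=1024) rather than spread across both sockets.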
00:04:32.682 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:32.682 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43366068 kB' 'MemAvailable: 46814692 kB' 'Buffers: 4096 kB' 'Cached: 11040276 kB' 'SwapCached: 0 kB' 'Active: 8094636 kB' 'Inactive: 3436880 kB' 'Active(anon): 7709036 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490432 kB' 'Mapped: 171388 kB' 'Shmem: 7221892 kB' 'KReclaimable: 246564 kB' 'Slab: 798292 kB' 'SReclaimable: 246564 kB' 'SUnreclaim: 551728 kB' 'KernelStack: 22080 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9032872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217052 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
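verify_nr_hugepages starts with a transparent-hugepage guard: the kernel prints the active THP mode in brackets, and the traced test at hugepages.sh@96 succeeds here because the box is in [madvise], so anonymous hugepage usage has to be sampled as well. A sketch of that guard (get_meminfo_sketch is the illustrative helper from earlier):

    # The active THP mode is the bracketed word; only when it is not
    # [never] can anonymous hugepages affect the accounting.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)          # 0 kB in this run
    fi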
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.682 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... common.sh@31-32: every remaining meminfo key from MemFree through HardwareCorrupted read and skipped via 'continue' ...]
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43366740 kB' 'MemAvailable: 46815364 kB' 'Buffers: 4096 kB' 'Cached: 11040280 kB' 'SwapCached: 0 kB' 'Active: 8094072 kB' 'Inactive: 3436880 kB' 'Active(anon): 7708472 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489892 kB' 'Mapped: 171236 kB' 'Shmem: 7221896 kB' 'KReclaimable: 246564 kB' 'Slab: 798328 kB' 'SReclaimable: 246564 kB' 'SUnreclaim: 551764 kB' 'KernelStack: 22144 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9032892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216988 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
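With anon=0 recorded, the verifier samples HugePages_Surp from the system-wide /proc/meminfo (node= is empty this time, so the common.sh@23 path check fails and mem_f stays /proc/meminfo). Surplus pages are pages the kernel allocated beyond the configured pool through overcommit, and the test expects 0 of them. The same four counters the dump above reports (1024 total, 1024 free, 0 reserved, 0 surplus) can be read directly:

    # Pool state for this run: HugePages_Total/Free/Rsvd/Surp.
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo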
20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.683 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 
20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.684 20:51:23 
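What the xtrace above amounts to is one helper doing a linear scan of a meminfo file. The sketch below is a reconstruction from the trace, not the verbatim setup/common.sh source: read the (optionally per-NUMA-node) meminfo file, strip any "Node N " prefixes, then walk key/value pairs until the requested key matches.

#!/usr/bin/env bash
# Minimal get_meminfo-style sketch, reconstructed from the trace above.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-} var val
	local mem_f=/proc/meminfo mem

	# Per-NUMA-node queries read the node-local meminfo instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Node files prefix every line with "Node N "; strip it (extglob pattern).
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan "Key: value [kB]" pairs; print the value of the requested key.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val" && return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Surp

On this node the call prints 0, which matches the surp=0 the trace stores above; the per-key continue lines in the log are simply this loop skipping every key before the match.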
00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.684 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.685 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.685 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.685 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43365976 kB' 'MemAvailable: 46814600 kB' 'Buffers: 4096 kB' 'Cached: 11040296 kB' 'SwapCached: 0 kB' 'Active: 8094160 kB' 'Inactive: 3436880 kB' 'Active(anon): 7708560 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489916 kB' 'Mapped: 171296 kB' 'Shmem: 7221912 kB' 'KReclaimable: 246564 kB' 'Slab: 798328 kB' 'SReclaimable: 246564 kB' 'SUnreclaim: 551764 kB' 'KernelStack: 22064 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9032912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217004 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:32.685 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan elided: MemTotal through HugePages_Free each fail [[ $var == HugePages_Rsvd ]] and hit continue]
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
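One expansion in each prologue above is easy to misread: mem=("${mem[@]#Node +([0-9]) }") at common.sh@29. It strips the "Node N " prefix that per-node meminfo files carry, so they parse identically to /proc/meminfo. A tiny self-contained demo, using hypothetical sample lines:

#!/usr/bin/env bash
# "+([0-9])" is an extglob pattern, so extglob must be enabled.
shopt -s extglob

# Hypothetical lines in /sys/devices/system/node/node0/meminfo format.
mem=('Node 0 HugePages_Total:  1024' 'Node 0 HugePages_Free:   1024')

# Same expansion the trace shows: remove "Node <digits> " from each element.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
# HugePages_Total:  1024
# HugePages_Free:   1024

Since node= is empty in every call traced here, the prefix never matches and the expansion is a no-op against /proc/meminfo.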
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43365176 kB' 'MemAvailable: 46813800 kB' 'Buffers: 4096 kB' 'Cached: 11040320 kB' 'SwapCached: 0 kB' 'Active: 8093672 kB' 'Inactive: 3436880 kB' 'Active(anon): 7708072 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489372 kB' 'Mapped: 171296 kB' 'Shmem: 7221936 kB' 'KReclaimable: 246564 kB' 'Slab: 798328 kB' 'SReclaimable: 246564 kB' 'SUnreclaim: 551764 kB' 'KernelStack: 22048 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9031336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216972 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
00:04:32.686 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan elided: keys from MemTotal onward each fail [[ $var == HugePages_Total ]] and hit continue; the trace is cut off mid-scan at the FileHugePages key]
setup/common.sh@32 -- # continue 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.687 20:51:23 
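The loop elided above is setup/common.sh's get_meminfo walking a meminfo file key by key until the requested field matches, then echoing its value (1024 here). A minimal sketch of that pattern, reconstructed from the visible trace entries (an approximation of the traced logic, not SPDK's exact source):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, prefer that node's own meminfo file (common.sh@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip it (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # Split "Key: value kB" on ': ' (common.sh@31), compare, emit on match.
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on the box traced above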
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.687 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27340052 kB' 'MemUsed: 5299088 kB' 'SwapCached: 0 kB' 'Active: 1793140 kB' 'Inactive: 72192 kB' 'Active(anon): 1621412 kB' 'Inactive(anon): 0 kB' 'Active(file): 171728 kB' 'Inactive(file): 72192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1636872 kB' 'Mapped: 78184 kB' 'AnonPages: 231644 kB' 'Shmem: 1392952 kB' 'KernelStack: 11480 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110928 kB' 'Slab: 335716 kB' 'SReclaimable: 110928 kB' 'SUnreclaim: 224788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: same per-key read/compare scan over the node0 meminfo fields until HugePages_Surp]
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
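The node-qualified call above resolves to node0's own meminfo file. With the helper sketched earlier, the same lookup is:

    get_meminfo HugePages_Surp 0   # reads /sys/devices/system/node/node0/meminfo, prints 0
    # One-off equivalent without the helper; per-node lines look like "Node 0 HugePages_Surp: 0":
    awk '$3 == "HugePages_Surp:" {print $4}' /sys/devices/system/node/node0/meminfo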
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:32.688 20:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:35.230 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:35.230 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:35.230 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
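Before the AnonHugePages expansion below: the CLEAR_HUGE=no / NRHUGE=512 run above ends with setup.sh keeping the existing 1024-page reservation rather than shrinking it to 512, which is the behavior this no_shrink_alloc test exercises. A hedged sketch of that guard, under the assumption that it compares the requested count against the node's current nr_hugepages (paths and names illustrative, not setup.sh's exact code):

    # Illustrative no-shrink guard: only grow an existing hugepage reservation.
    NRHUGE=512
    hp_dir=/sys/devices/system/node/node0/hugepages/hugepages-2048kB   # assumed 2 MB page size, node0
    allocated=$(<"$hp_dir/nr_hugepages")
    if (( allocated >= NRHUGE )); then
        # Never shrink a reservation a concurrent test may rely on.
        echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node0"
    else
        echo "$NRHUGE" >"$hp_dir/nr_hugepages"   # requires root
    fi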
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:35.230 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43367868 kB' 'MemAvailable: 46816492 kB' 'Buffers: 4096 kB' 'Cached: 11040412 kB' 'SwapCached: 0 kB' 'Active: 8099864 kB' 'Inactive: 3436880 kB' 'Active(anon): 7714264 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495332 kB' 'Mapped: 171880 kB' 'Shmem: 7222028 kB' 'KReclaimable: 246564 kB' 'Slab: 798556 kB' 'SReclaimable: 246564 kB' 'SUnreclaim: 551992 kB' 'KernelStack: 22016 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9039556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216780 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
[xtrace elided: per-key read/compare scan of /proc/meminfo until AnonHugePages]
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:35.232 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43375692 kB' 'MemAvailable: 46824316 kB' 'Buffers: 4096 kB' 'Cached: 11040412 kB' 'SwapCached: 0 kB' 'Active: 8094916 kB' 'Inactive: 3436880 kB' 'Active(anon): 7709316 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490700 kB' 'Mapped: 172180 kB' 'Shmem: 7222028 kB' 'KReclaimable: 246564 kB' 'Slab: 798564 kB' 'SReclaimable: 246564 kB' 'SUnreclaim: 552000 kB' 'KernelStack: 22016 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9034676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216828 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
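The hugepages.sh@96 test earlier in this block ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) matches a mode string against the literal [never] marker before the AnonHugePages lookup. A sketch of that guard, assuming the string comes from /sys/kernel/mm/transparent_hugepage/enabled, which its bracketed "[madvise]" format suggests:

    # Count anonymous hugepages only when THP is not globally disabled.
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # helper sketched earlier; 0 kB in the dump above
    fi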
[xtrace elided: per-key read/compare scan of /proc/meminfo for HugePages_Surp]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.233 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43368664 kB' 'MemAvailable: 46817288 kB' 'Buffers: 4096 kB' 'Cached: 11040436 kB' 'SwapCached: 0 kB' 'Active: 8099984 kB' 'Inactive: 3436880 kB' 'Active(anon): 7714384 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496092 kB' 'Mapped: 171804 kB' 'Shmem: 7222052 kB' 'KReclaimable: 246564 kB' 'Slab: 798564 kB' 'SReclaimable: 246564 kB' 'SUnreclaim: 552000 kB' 'KernelStack: 22192 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 
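For readers following the trace: each get_meminfo call above walks the meminfo snapshot with IFS=': ' and read -r, continuing past every field until the requested name matches, then echoes that field's value. A minimal sketch of the same pattern, as a hypothetical standalone helper (simplified from what the script's setup/common.sh actually does; it reads /proc/meminfo directly rather than a captured snapshot):

    # get_meminfo_value NAME -> prints the value column of that /proc/meminfo field
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested one matches, then print its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    surp=$(get_meminfo_value HugePages_Surp)   # resolves to 0 in the run above
    resv=$(get_meminfo_value HugePages_Rsvd)   # resolves to 0 in the run above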
00:04:35.234 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [per-field scan trace condensed: the same scan runs again, continuing past every field that is not HugePages_Rsvd]
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:35.498 nr_hugepages=1024
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:35.498 resv_hugepages=0
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:35.498 surplus_hugepages=0
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:35.498 anon_hugepages=0
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:35.498 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:35.499 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 43370908 kB' 'MemAvailable: 46819532 kB' 'Buffers: 4096 kB' 'Cached: 11040456 kB' 'SwapCached: 0 kB' 'Active: 8094660 kB' 'Inactive: 3436880 kB' 'Active(anon): 7709060 kB' 'Inactive(anon): 0 kB' 'Active(file): 385600 kB' 'Inactive(file): 3436880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490256 kB' 'Mapped: 171712 kB' 'Shmem: 7222072 kB' 'KReclaimable: 246564 kB' 'Slab: 798564 kB' 'SReclaimable: 246564 kB' 'SUnreclaim: 552000 kB' 'KernelStack: 22080 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 9033496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216924 kB' 'VmallocChunk: 0 kB' 'Percpu: 75712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2092404 kB' 'DirectMap2M: 23808000 kB' 'DirectMap1G: 42991616 kB'
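The checks at setup/hugepages.sh@107-109 above assert that the configured pool size matches the kernel's view of it: the values just echoed (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0) must sum to HugePages_Total. A hedged sketch of that accounting, assuming a direct awk read of /proc/meminfo rather than the script's own helper:

    # Verify the hugepage pool is consistent before the no_shrink_alloc test proceeds.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: HugePages_Total=$total"
    else
        echo "hugepage pool mismatch: HugePages_Total=$total" >&2
    fi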
00:04:35.499 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [per-field scan trace condensed: the scan continues past every field that is not HugePages_Total]
00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:35.500 20:51:26
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.500 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27336936 kB' 'MemUsed: 5302204 kB' 'SwapCached: 0 kB' 'Active: 1792516 kB' 'Inactive: 72192 kB' 'Active(anon): 1620788 kB' 'Inactive(anon): 0 kB' 'Active(file): 171728 kB' 'Inactive(file): 72192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1636984 kB' 'Mapped: 78184 kB' 'AnonPages: 230904 kB' 'Shmem: 1393064 kB' 'KernelStack: 11480 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110928 kB' 'Slab: 335984 kB' 'SReclaimable: 110928 kB' 'SUnreclaim: 225056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.501 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:35.502 node0=1024 expecting 1024 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:35.502 00:04:35.502 real 0m6.268s 00:04:35.502 user 0m2.349s 00:04:35.502 sys 0m3.951s 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.502 20:51:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:35.502 ************************************ 00:04:35.502 END TEST no_shrink_alloc 00:04:35.502 ************************************ 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.502 20:51:26 
setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:35.502 20:51:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:35.502 00:04:35.502 real 0m25.632s 00:04:35.502 user 0m8.729s 00:04:35.502 sys 0m15.289s 00:04:35.502 20:51:26 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.502 20:51:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.502 ************************************ 00:04:35.502 END TEST hugepages 00:04:35.502 ************************************ 00:04:35.502 20:51:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:35.502 20:51:26 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.502 20:51:26 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.502 20:51:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:35.502 ************************************ 00:04:35.502 START TEST driver 00:04:35.502 ************************************ 00:04:35.502 20:51:26 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:35.762 * Looking for test storage... 00:04:35.762 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:35.762 20:51:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:35.762 20:51:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.762 20:51:26 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.958 20:51:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:39.958 20:51:30 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.958 20:51:30 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.958 20:51:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:40.217 ************************************ 00:04:40.217 START TEST guess_driver 00:04:40.217 ************************************ 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- 
setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:40.217 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:40.217 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:40.217 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:40.217 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:40.217 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:40.217 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:40.217 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:40.217 Looking for driver=vfio-pci 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.217 20:51:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.508 20:51:33 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.508 20:51:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:45.416 20:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:45.416 20:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:45.416 20:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:45.416 20:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:45.416 20:51:35 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:45.416 20:51:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.416 20:51:35 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.692 00:04:50.692 real 0m9.899s 00:04:50.692 user 0m2.392s 00:04:50.692 sys 0m4.808s 00:04:50.692 20:51:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.692 20:51:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.692 ************************************ 00:04:50.692 END TEST guess_driver 00:04:50.692 ************************************ 00:04:50.692 00:04:50.692 real 0m14.513s 00:04:50.692 user 0m3.715s 00:04:50.692 sys 0m7.311s 00:04:50.692 20:51:40 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.692 20:51:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.692 ************************************ 00:04:50.692 END TEST driver 00:04:50.692 ************************************ 00:04:50.692 20:51:40 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:50.692 20:51:40 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.692 20:51:40 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.692 20:51:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.692 ************************************ 00:04:50.692 START TEST devices 00:04:50.692 ************************************ 00:04:50.692 20:51:40 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:50.692 * Looking for test storage... 
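Before the devices output continues: the guess_driver test above settled on vfio-pci because the IOMMU-group glob expanded to 176 entries and modprobe could resolve vfio_pci down to concrete .ko.xz modules. A sketch of that decision, assuming a uio_pci_generic fallback that this run never exercises (function name illustrative):

    pick_driver_sketch() {
        shopt -s nullglob                  # so an empty dir yields 0 groups
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        # 176 groups were populated in the run above, so this branch was taken
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            # vfio_pci only counts if modprobe resolves it to real modules
            if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo uio_pci_generic               # assumed fallback, not in this run
    }

Probing with modprobe --show-depends rather than lsmod means the driver only has to be loadable, not already loaded, when the choice is made.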
00:04:50.692 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:50.692 20:51:40 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:50.692 20:51:40 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:50.692 20:51:40 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.692 20:51:40 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:53.982 20:51:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:53.982 20:51:44 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:53.982 No valid GPT data, bailing 00:04:53.982 20:51:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:53.982 20:51:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:53.982 20:51:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:53.982 20:51:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:53.982 20:51:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:53.982 20:51:44 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 
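nvme0n1 earned its place as test_disk through three checks traced above: the zoned probe (queue/zoned must read "none"), the in-use probe whose failure prints "No valid GPT data, bailing", and a 3221225472-byte minimum against the disk's 2000398934016 bytes. A condensed sketch of the same vetting (helper name illustrative; the real flow also runs spdk-gpt.py before falling back to blkid):

    vet_test_disk_sketch() {
        local block=$1 min_disk_size=3221225472      # 3 GiB floor from the run
        # zoned namespaces are skipped; "none" marks an ordinary device
        [[ $(<"/sys/block/$block/queue/zoned") == none ]] || return 1
        # an empty PTTYPE is the "No valid GPT data, bailing" case: disk free
        [[ -z $(blkid -s PTTYPE -o value "/dev/$block") ]] || return 1
        # /sys/block/<dev>/size counts 512-byte sectors
        local bytes=$(( $(<"/sys/block/$block/size") * 512 ))
        (( bytes >= min_disk_size ))
    }
    # vet_test_disk_sketch nvme0n1 && echo "usable test disk"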
00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:53.982 20:51:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.982 20:51:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:53.982 ************************************ 00:04:53.982 START TEST nvme_mount 00:04:53.982 ************************************ 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:53.982 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.983 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:53.983 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:53.983 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.983 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:53.983 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:53.983 20:51:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:54.921 Creating new GPT entries in memory. 00:04:54.921 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:54.921 other utilities. 00:04:54.921 20:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:54.921 20:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.921 20:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:54.921 20:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:54.921 20:51:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:56.301 Creating new GPT entries in memory. 00:04:56.301 The operation has completed successfully. 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3335568 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.301 20:51:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
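Stripped of the xtrace plumbing, the nvme_mount setup just traced reduces to the skeleton below (paths shortened to stand-ins; the end sector 2099199 makes partition 1 exactly 2097152 sectors, i.e. 1024 MiB):

    disk=/dev/nvme0n1
    part=${disk}p1
    mnt=$PWD/nvme_mount            # stand-in for spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all               # destroy old GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199    # sectors 2048..2099199 = 1024 MiB
    mkfs.ext4 -qF "$part"                  # quiet, force without prompting
    mkdir -p "$mnt" && mount "$part" "$mnt"
    touch "$mnt/test_nvme"                 # stand-in for the dummy test file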
00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:58.865 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.125 20:51:49 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:59.125 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.125 20:51:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.385 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:59.385 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:59.385 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:59.385 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:59.385 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:59.385 20:51:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:59.385 20:51:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.385 20:51:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:59.385 20:51:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:59.385 20:51:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.644 20:51:50 
setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.644 20:51:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 
20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.246 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.247 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:02.247 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:02.247 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- 
setup/devices.sh@53 -- # local found=0 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.506 20:51:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.796 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.797 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:05.797 00:05:05.797 real 0m11.843s 00:05:05.797 user 0m3.339s 00:05:05.797 sys 0m6.338s 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.797 20:51:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:05.797 ************************************ 00:05:05.797 END TEST nvme_mount 00:05:05.797 ************************************ 00:05:05.797 20:51:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:05.797 20:51:56 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.797 20:51:56 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.797 20:51:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:06.057 ************************************ 00:05:06.057 START TEST dm_mount 00:05:06.057 ************************************ 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # 
pv1=nvme0n1p2 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:06.057 20:51:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:06.996 Creating new GPT entries in memory. 00:05:06.996 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:06.996 other utilities. 00:05:06.996 20:51:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:06.996 20:51:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.996 20:51:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:06.996 20:51:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:06.996 20:51:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:07.935 Creating new GPT entries in memory. 00:05:07.935 The operation has completed successfully. 00:05:07.935 20:51:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:07.935 20:51:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.935 20:51:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:07.935 20:51:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:07.935 20:51:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:08.873 The operation has completed successfully. 
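
Annotation: the dm_mount setup above carves the drive into two equal partitions before layering device-mapper on top, and the geometry follows directly from the size arithmetic in common.sh: size=1073741824 bytes divided by the 512-byte sector size gives 2097152 sectors per partition, so partition 1 spans sectors 2048-2099199 and partition 2 spans 2099200-4196351. A minimal standalone sketch of the same sequence, assuming /dev/nvme0n1 is a disposable scratch disk (partprobe is a stand-in for the harness's sync_dev_uevents.sh wait):

    sgdisk /dev/nvme0n1 --zap-all                                   # destroy any existing GPT/MBR metadata
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199     # 2097152 sectors * 512 B = 1 GiB
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351  # second 1 GiB partition
    partprobe /dev/nvme0n1                                          # let the kernel re-read the partition table
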
00:05:08.873 20:51:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:08.873 20:51:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.873 20:51:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3339991 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:09.146 20:51:59 
setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.146 20:51:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.435 20:52:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.435 
20:52:03 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:15.725 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:15.725 00:05:15.725 real 0m9.807s 00:05:15.725 user 0m2.386s 00:05:15.725 sys 0m4.541s 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.725 20:52:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:15.725 ************************************ 00:05:15.725 END TEST dm_mount 00:05:15.725 ************************************ 00:05:15.725 20:52:06 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:15.725 20:52:06 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:15.725 20:52:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.726 20:52:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.726 20:52:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.726 20:52:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.726 20:52:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.985 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:15.985 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:15.985 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:15.985 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:15.985 20:52:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:15.985 20:52:06 setup.sh.devices -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:15.985 20:52:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:15.985 20:52:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.986 20:52:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:15.986 20:52:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.986 20:52:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:15.986 00:05:15.986 real 0m25.949s 00:05:15.986 user 0m7.213s 00:05:15.986 sys 0m13.611s 00:05:15.986 20:52:06 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.986 20:52:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:15.986 ************************************ 00:05:15.986 END TEST devices 00:05:15.986 ************************************ 00:05:16.245 00:05:16.245 real 1m30.834s 00:05:16.245 user 0m27.269s 00:05:16.245 sys 0m51.101s 00:05:16.245 20:52:06 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.245 20:52:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:16.245 ************************************ 00:05:16.245 END TEST setup.sh 00:05:16.245 ************************************ 00:05:16.245 20:52:06 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:19.538 Hugepages 00:05:19.538 node hugesize free / total 00:05:19.538 node0 1048576kB 0 / 0 00:05:19.538 node0 2048kB 2048 / 2048 00:05:19.538 node1 1048576kB 0 / 0 00:05:19.538 node1 2048kB 0 / 0 00:05:19.538 00:05:19.538 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:19.538 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:19.538 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:19.538 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:19.538 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:19.538 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:19.538 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:19.538 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:19.538 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:19.538 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:19.538 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:19.538 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:19.538 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:19.538 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:19.538 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:19.538 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:19.538 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:19.538 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:19.538 20:52:10 -- spdk/autotest.sh@130 -- # uname -s 00:05:19.538 20:52:10 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:19.538 20:52:10 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:19.538 20:52:10 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:22.831 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:80:04.7 (8086 2021): ioatdma -> 
vfio-pci 00:05:22.831 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.831 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:24.764 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.764 20:52:15 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:26.178 20:52:16 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:26.178 20:52:16 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:26.178 20:52:16 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.178 20:52:16 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:26.178 20:52:16 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:26.178 20:52:16 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:26.178 20:52:16 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.178 20:52:16 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:26.178 20:52:16 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:26.178 20:52:16 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:26.178 20:52:16 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:05:26.178 20:52:16 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.468 Waiting for block devices as requested 00:05:29.468 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:29.468 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:29.468 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:29.468 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:29.468 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:29.728 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:29.728 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:29.728 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:29.988 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:29.988 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:29.988 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:30.247 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:30.247 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:30.247 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:30.507 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:30.507 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:30.507 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:30.767 20:52:21 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:30.767 20:52:21 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:30.767 20:52:21 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:30.767 20:52:21 -- common/autotest_common.sh@1498 -- # grep 0000:d8:00.0/nvme/nvme 00:05:30.767 20:52:21 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:30.767 20:52:21 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:30.767 20:52:21 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:30.767 20:52:21 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:30.767 
20:52:21 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:30.767 20:52:21 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:30.767 20:52:21 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:30.767 20:52:21 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:30.767 20:52:21 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:30.767 20:52:21 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:05:30.767 20:52:21 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:30.767 20:52:21 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:30.767 20:52:21 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:30.767 20:52:21 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:30.767 20:52:21 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:30.767 20:52:21 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:30.767 20:52:21 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:30.767 20:52:21 -- common/autotest_common.sh@1553 -- # continue 00:05:30.767 20:52:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:30.767 20:52:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.767 20:52:21 -- common/autotest_common.sh@10 -- # set +x 00:05:30.767 20:52:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:30.767 20:52:21 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:30.767 20:52:21 -- common/autotest_common.sh@10 -- # set +x 00:05:30.767 20:52:21 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:34.073 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:34.073 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:34.073 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:34.073 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:34.073 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:34.073 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:34.073 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:34.073 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:34.332 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:34.332 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:34.332 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:34.332 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:34.332 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:34.332 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:34.332 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:34.332 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:36.261 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:36.261 20:52:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:36.261 20:52:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.261 20:52:27 -- common/autotest_common.sh@10 -- # set +x 00:05:36.542 20:52:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:36.542 20:52:27 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:36.542 20:52:27 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:36.542 20:52:27 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:36.542 20:52:27 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:36.542 20:52:27 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:36.542 20:52:27 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:36.542 20:52:27 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:36.542 20:52:27 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
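
Annotation: the bdfs assignment just above (expanded in the gen_nvme.sh and jq invocations that follow) is how the harness enumerates NVMe controllers: gen_nvme.sh emits an SPDK bdev configuration as JSON and jq extracts each controller's PCI address (traddr). Pulled out on its own, the pipeline is:

    # enumerate NVMe controller BDFs the way get_nvme_bdfs does
    bdfs=($(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"    # on this node it prints the single drive, 0000:d8:00.0
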
00:05:36.542 20:52:27 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:36.542 20:52:27 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:36.542 20:52:27 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:36.542 20:52:27 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:05:36.542 20:52:27 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:36.542 20:52:27 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:36.542 20:52:27 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:36.542 20:52:27 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:36.542 20:52:27 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:36.542 20:52:27 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:d8:00.0 00:05:36.542 20:52:27 -- common/autotest_common.sh@1588 -- # [[ -z 0000:d8:00.0 ]] 00:05:36.542 20:52:27 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3349806 00:05:36.542 20:52:27 -- common/autotest_common.sh@1594 -- # waitforlisten 3349806 00:05:36.542 20:52:27 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.542 20:52:27 -- common/autotest_common.sh@827 -- # '[' -z 3349806 ']' 00:05:36.542 20:52:27 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.542 20:52:27 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.542 20:52:27 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.542 20:52:27 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.542 20:52:27 -- common/autotest_common.sh@10 -- # set +x 00:05:36.542 [2024-07-13 20:52:27.375708] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:36.543 [2024-07-13 20:52:27.375767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349806 ] 00:05:36.543 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.801 [2024-07-13 20:52:27.449366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.801 [2024-07-13 20:52:27.488882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.369 20:52:28 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.369 20:52:28 -- common/autotest_common.sh@860 -- # return 0 00:05:37.369 20:52:28 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:37.369 20:52:28 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:37.369 20:52:28 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:40.658 nvme0n1 00:05:40.658 20:52:31 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:40.658 [2024-07-13 20:52:31.301902] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:40.658 request: 00:05:40.658 { 00:05:40.658 "nvme_ctrlr_name": "nvme0", 00:05:40.658 "password": "test", 00:05:40.658 "method": "bdev_nvme_opal_revert", 00:05:40.658 "req_id": 1 00:05:40.658 } 00:05:40.658 Got JSON-RPC error response 00:05:40.658 response: 00:05:40.658 { 00:05:40.658 "code": -32602, 00:05:40.658 "message": "Invalid parameters" 00:05:40.658 } 00:05:40.658 20:52:31 -- common/autotest_common.sh@1600 -- # true 00:05:40.658 20:52:31 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:40.658 20:52:31 -- common/autotest_common.sh@1604 -- # killprocess 3349806 00:05:40.658 20:52:31 -- common/autotest_common.sh@946 -- # '[' -z 3349806 ']' 00:05:40.658 20:52:31 -- common/autotest_common.sh@950 -- # kill -0 3349806 00:05:40.658 20:52:31 -- common/autotest_common.sh@951 -- # uname 00:05:40.658 20:52:31 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:40.658 20:52:31 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3349806 00:05:40.658 20:52:31 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:40.658 20:52:31 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:40.658 20:52:31 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3349806' 00:05:40.658 killing process with pid 3349806 00:05:40.658 20:52:31 -- common/autotest_common.sh@965 -- # kill 3349806 00:05:40.658 20:52:31 -- common/autotest_common.sh@970 -- # wait 3349806 00:05:43.192 20:52:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:43.192 20:52:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:43.192 20:52:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:43.192 20:52:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:43.192 20:52:33 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:43.192 20:52:33 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:43.192 20:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:43.192 20:52:33 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:43.192 20:52:33 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:43.192 20:52:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.192 20:52:33 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.192 20:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:43.192 ************************************ 00:05:43.192 START TEST env 00:05:43.192 ************************************ 00:05:43.192 20:52:33 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:43.192 * Looking for test storage... 00:05:43.192 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:43.192 20:52:34 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:43.192 20:52:34 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.192 20:52:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.192 20:52:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.451 ************************************ 00:05:43.451 START TEST env_memory 00:05:43.451 ************************************ 00:05:43.451 20:52:34 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:43.451 00:05:43.451 00:05:43.451 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.451 http://cunit.sourceforge.net/ 00:05:43.451 00:05:43.451 00:05:43.451 Suite: memory 00:05:43.451 Test: alloc and free memory map ...[2024-07-13 20:52:34.164719] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:43.451 passed 00:05:43.451 Test: mem map translation ...[2024-07-13 20:52:34.183200] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:43.451 [2024-07-13 20:52:34.183215] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:43.451 [2024-07-13 20:52:34.183250] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:43.451 [2024-07-13 20:52:34.183258] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:43.451 passed 00:05:43.451 Test: mem map registration ...[2024-07-13 20:52:34.218127] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:43.451 [2024-07-13 20:52:34.218142] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:43.451 passed 00:05:43.451 Test: mem map adjacent registrations ...passed 00:05:43.451 00:05:43.451 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.451 suites 1 1 n/a 0 0 00:05:43.451 tests 4 4 4 0 0 00:05:43.451 asserts 152 152 152 0 n/a 00:05:43.451 00:05:43.451 Elapsed time = 0.132 seconds 00:05:43.451 00:05:43.451 real 0m0.146s 00:05:43.451 user 0m0.136s 00:05:43.451 sys 0m0.010s 00:05:43.451 20:52:34 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.451 20:52:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:43.451 ************************************ 00:05:43.451 END 
TEST env_memory 00:05:43.451 ************************************ 00:05:43.451 20:52:34 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:43.451 20:52:34 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.451 20:52:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.451 20:52:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.711 ************************************ 00:05:43.711 START TEST env_vtophys 00:05:43.711 ************************************ 00:05:43.711 20:52:34 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:43.711 EAL: lib.eal log level changed from notice to debug 00:05:43.711 EAL: Detected lcore 0 as core 0 on socket 0 00:05:43.712 EAL: Detected lcore 1 as core 1 on socket 0 00:05:43.712 EAL: Detected lcore 2 as core 2 on socket 0 00:05:43.712 EAL: Detected lcore 3 as core 3 on socket 0 00:05:43.712 EAL: Detected lcore 4 as core 4 on socket 0 00:05:43.712 EAL: Detected lcore 5 as core 5 on socket 0 00:05:43.712 EAL: Detected lcore 6 as core 6 on socket 0 00:05:43.712 EAL: Detected lcore 7 as core 8 on socket 0 00:05:43.712 EAL: Detected lcore 8 as core 9 on socket 0 00:05:43.712 EAL: Detected lcore 9 as core 10 on socket 0 00:05:43.712 EAL: Detected lcore 10 as core 11 on socket 0 00:05:43.712 EAL: Detected lcore 11 as core 12 on socket 0 00:05:43.712 EAL: Detected lcore 12 as core 13 on socket 0 00:05:43.712 EAL: Detected lcore 13 as core 14 on socket 0 00:05:43.712 EAL: Detected lcore 14 as core 16 on socket 0 00:05:43.712 EAL: Detected lcore 15 as core 17 on socket 0 00:05:43.712 EAL: Detected lcore 16 as core 18 on socket 0 00:05:43.712 EAL: Detected lcore 17 as core 19 on socket 0 00:05:43.712 EAL: Detected lcore 18 as core 20 on socket 0 00:05:43.712 EAL: Detected lcore 19 as core 21 on socket 0 00:05:43.712 EAL: Detected lcore 20 as core 22 on socket 0 00:05:43.712 EAL: Detected lcore 21 as core 24 on socket 0 00:05:43.712 EAL: Detected lcore 22 as core 25 on socket 0 00:05:43.712 EAL: Detected lcore 23 as core 26 on socket 0 00:05:43.712 EAL: Detected lcore 24 as core 27 on socket 0 00:05:43.712 EAL: Detected lcore 25 as core 28 on socket 0 00:05:43.712 EAL: Detected lcore 26 as core 29 on socket 0 00:05:43.712 EAL: Detected lcore 27 as core 30 on socket 0 00:05:43.712 EAL: Detected lcore 28 as core 0 on socket 1 00:05:43.712 EAL: Detected lcore 29 as core 1 on socket 1 00:05:43.712 EAL: Detected lcore 30 as core 2 on socket 1 00:05:43.712 EAL: Detected lcore 31 as core 3 on socket 1 00:05:43.712 EAL: Detected lcore 32 as core 4 on socket 1 00:05:43.712 EAL: Detected lcore 33 as core 5 on socket 1 00:05:43.712 EAL: Detected lcore 34 as core 6 on socket 1 00:05:43.712 EAL: Detected lcore 35 as core 8 on socket 1 00:05:43.712 EAL: Detected lcore 36 as core 9 on socket 1 00:05:43.712 EAL: Detected lcore 37 as core 10 on socket 1 00:05:43.712 EAL: Detected lcore 38 as core 11 on socket 1 00:05:43.712 EAL: Detected lcore 39 as core 12 on socket 1 00:05:43.712 EAL: Detected lcore 40 as core 13 on socket 1 00:05:43.712 EAL: Detected lcore 41 as core 14 on socket 1 00:05:43.712 EAL: Detected lcore 42 as core 16 on socket 1 00:05:43.712 EAL: Detected lcore 43 as core 17 on socket 1 00:05:43.712 EAL: Detected lcore 44 as core 18 on socket 1 00:05:43.712 EAL: Detected lcore 45 as core 19 on socket 1 00:05:43.712 EAL: Detected lcore 46 as core 20 on socket 1 00:05:43.712 EAL: 
Detected lcore 47 as core 21 on socket 1 00:05:43.712 EAL: Detected lcore 48 as core 22 on socket 1 00:05:43.712 EAL: Detected lcore 49 as core 24 on socket 1 00:05:43.712 EAL: Detected lcore 50 as core 25 on socket 1 00:05:43.712 EAL: Detected lcore 51 as core 26 on socket 1 00:05:43.712 EAL: Detected lcore 52 as core 27 on socket 1 00:05:43.712 EAL: Detected lcore 53 as core 28 on socket 1 00:05:43.712 EAL: Detected lcore 54 as core 29 on socket 1 00:05:43.712 EAL: Detected lcore 55 as core 30 on socket 1 00:05:43.712 EAL: Detected lcore 56 as core 0 on socket 0 00:05:43.712 EAL: Detected lcore 57 as core 1 on socket 0 00:05:43.712 EAL: Detected lcore 58 as core 2 on socket 0 00:05:43.712 EAL: Detected lcore 59 as core 3 on socket 0 00:05:43.712 EAL: Detected lcore 60 as core 4 on socket 0 00:05:43.712 EAL: Detected lcore 61 as core 5 on socket 0 00:05:43.712 EAL: Detected lcore 62 as core 6 on socket 0 00:05:43.712 EAL: Detected lcore 63 as core 8 on socket 0 00:05:43.712 EAL: Detected lcore 64 as core 9 on socket 0 00:05:43.712 EAL: Detected lcore 65 as core 10 on socket 0 00:05:43.712 EAL: Detected lcore 66 as core 11 on socket 0 00:05:43.712 EAL: Detected lcore 67 as core 12 on socket 0 00:05:43.712 EAL: Detected lcore 68 as core 13 on socket 0 00:05:43.712 EAL: Detected lcore 69 as core 14 on socket 0 00:05:43.712 EAL: Detected lcore 70 as core 16 on socket 0 00:05:43.712 EAL: Detected lcore 71 as core 17 on socket 0 00:05:43.712 EAL: Detected lcore 72 as core 18 on socket 0 00:05:43.712 EAL: Detected lcore 73 as core 19 on socket 0 00:05:43.712 EAL: Detected lcore 74 as core 20 on socket 0 00:05:43.712 EAL: Detected lcore 75 as core 21 on socket 0 00:05:43.712 EAL: Detected lcore 76 as core 22 on socket 0 00:05:43.712 EAL: Detected lcore 77 as core 24 on socket 0 00:05:43.712 EAL: Detected lcore 78 as core 25 on socket 0 00:05:43.712 EAL: Detected lcore 79 as core 26 on socket 0 00:05:43.712 EAL: Detected lcore 80 as core 27 on socket 0 00:05:43.712 EAL: Detected lcore 81 as core 28 on socket 0 00:05:43.712 EAL: Detected lcore 82 as core 29 on socket 0 00:05:43.712 EAL: Detected lcore 83 as core 30 on socket 0 00:05:43.712 EAL: Detected lcore 84 as core 0 on socket 1 00:05:43.712 EAL: Detected lcore 85 as core 1 on socket 1 00:05:43.712 EAL: Detected lcore 86 as core 2 on socket 1 00:05:43.712 EAL: Detected lcore 87 as core 3 on socket 1 00:05:43.712 EAL: Detected lcore 88 as core 4 on socket 1 00:05:43.712 EAL: Detected lcore 89 as core 5 on socket 1 00:05:43.712 EAL: Detected lcore 90 as core 6 on socket 1 00:05:43.712 EAL: Detected lcore 91 as core 8 on socket 1 00:05:43.712 EAL: Detected lcore 92 as core 9 on socket 1 00:05:43.712 EAL: Detected lcore 93 as core 10 on socket 1 00:05:43.712 EAL: Detected lcore 94 as core 11 on socket 1 00:05:43.712 EAL: Detected lcore 95 as core 12 on socket 1 00:05:43.712 EAL: Detected lcore 96 as core 13 on socket 1 00:05:43.712 EAL: Detected lcore 97 as core 14 on socket 1 00:05:43.712 EAL: Detected lcore 98 as core 16 on socket 1 00:05:43.712 EAL: Detected lcore 99 as core 17 on socket 1 00:05:43.712 EAL: Detected lcore 100 as core 18 on socket 1 00:05:43.712 EAL: Detected lcore 101 as core 19 on socket 1 00:05:43.712 EAL: Detected lcore 102 as core 20 on socket 1 00:05:43.712 EAL: Detected lcore 103 as core 21 on socket 1 00:05:43.712 EAL: Detected lcore 104 as core 22 on socket 1 00:05:43.712 EAL: Detected lcore 105 as core 24 on socket 1 00:05:43.712 EAL: Detected lcore 106 as core 25 on socket 1 00:05:43.712 EAL: Detected lcore 107 as 
core 26 on socket 1 00:05:43.712 EAL: Detected lcore 108 as core 27 on socket 1 00:05:43.712 EAL: Detected lcore 109 as core 28 on socket 1 00:05:43.712 EAL: Detected lcore 110 as core 29 on socket 1 00:05:43.712 EAL: Detected lcore 111 as core 30 on socket 1 00:05:43.712 EAL: Maximum logical cores by configuration: 128 00:05:43.712 EAL: Detected CPU lcores: 112 00:05:43.712 EAL: Detected NUMA nodes: 2 00:05:43.712 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:43.712 EAL: Detected shared linkage of DPDK 00:05:43.712 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:43.712 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:43.712 EAL: Registered [vdev] bus. 00:05:43.712 EAL: bus.vdev log level changed from disabled to notice 00:05:43.712 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:43.712 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:43.712 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:43.712 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:43.712 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:43.712 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:43.712 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:43.712 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:43.712 EAL: No shared files mode enabled, IPC will be disabled 00:05:43.712 EAL: No shared files mode enabled, IPC is disabled 00:05:43.712 EAL: Bus pci wants IOVA as 'DC' 00:05:43.712 EAL: Bus vdev wants IOVA as 'DC' 00:05:43.712 EAL: Buses did not request a specific IOVA mode. 00:05:43.712 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:43.712 EAL: Selected IOVA mode 'VA' 00:05:43.712 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.712 EAL: Probing VFIO support... 00:05:43.712 EAL: IOMMU type 1 (Type 1) is supported 00:05:43.712 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:43.712 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:43.712 EAL: VFIO support initialized 00:05:43.712 EAL: Ask a virtual area of 0x2e000 bytes 00:05:43.712 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:43.712 EAL: Setting up physically contiguous memory... 
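The EAL lines above show the IOVA-mode negotiation (VA, since an IOMMU is present) and VFIO probing; the memseg reservations continue just below. For reference, a minimal C sketch of how a standalone program would request this same environment through SPDK's env layer. It assumes an SPDK build on the include/link path; the process name "env_demo" is illustrative, and the core mask and base virtual address mirror the -c 0x1 and --base-virtaddr=0x200000000000 flags these tests pass on the command line.

    /* Minimal sketch: request the environment negotiated above via SPDK's env API. */
    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);                /* fill in defaults */
        opts.name = "env_demo";                   /* illustrative process name */
        opts.core_mask = "0x1";                   /* as passed to the tests via -c 0x1 */
        opts.base_virtaddr = 0x200000000000ULL;   /* matches the virtual areas reserved above */

        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init() failed\n");
            return 1;
        }
        /* ... hugepage-backed heap is ready, IOVA mode negotiated ... */
        spdk_env_fini();
        return 0;
    }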
00:05:43.712 EAL: Setting maximum number of open files to 524288 00:05:43.712 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:43.712 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:43.712 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:43.712 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.712 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:43.712 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.712 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.712 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:43.712 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:43.712 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.712 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:43.713 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.713 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.713 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:43.713 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:43.713 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.713 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:43.713 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.713 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.713 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:43.713 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:43.713 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.713 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:43.713 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.713 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.713 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:43.713 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:43.713 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:43.713 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.713 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:43.713 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:43.713 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.713 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:43.713 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:43.713 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.713 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:43.713 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:43.713 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.713 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:43.713 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:43.713 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.713 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:43.713 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:43.713 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.713 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:43.713 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:43.713 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.713 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:43.713 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:43.713 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.713 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:43.713 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:43.713 EAL: Hugepages will be freed exactly as allocated. 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: TSC frequency is ~2500000 KHz 00:05:43.713 EAL: Main lcore 0 is ready (tid=7f00ba4f2a00;cpuset=[0]) 00:05:43.713 EAL: Trying to obtain current memory policy. 00:05:43.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.713 EAL: Restoring previous memory policy: 0 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was expanded by 2MB 00:05:43.713 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:05:43.713 EAL: probe driver: 8086:37d2 net_i40e 00:05:43.713 EAL: Not managed by a supported kernel driver, skipped 00:05:43.713 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:05:43.713 EAL: probe driver: 8086:37d2 net_i40e 00:05:43.713 EAL: Not managed by a supported kernel driver, skipped 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:43.713 EAL: Mem event callback 'spdk:(nil)' registered 00:05:43.713 00:05:43.713 00:05:43.713 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.713 http://cunit.sourceforge.net/ 00:05:43.713 00:05:43.713 00:05:43.713 Suite: components_suite 00:05:43.713 Test: vtophys_malloc_test ...passed 00:05:43.713 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:43.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.713 EAL: Restoring previous memory policy: 4 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was expanded by 4MB 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was shrunk by 4MB 00:05:43.713 EAL: Trying to obtain current memory policy. 00:05:43.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.713 EAL: Restoring previous memory policy: 4 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was expanded by 6MB 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was shrunk by 6MB 00:05:43.713 EAL: Trying to obtain current memory policy. 00:05:43.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.713 EAL: Restoring previous memory policy: 4 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was expanded by 10MB 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was shrunk by 10MB 00:05:43.713 EAL: Trying to obtain current memory policy. 
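The heap expand/shrink pairs above and below come from vtophys_spdk_malloc_test: the sizes sweep 2^n + 2 MB (4, 6, 10, 18, ..., 1026 MB), apparently so each allocation straddles at least one 2 MiB hugepage boundary, and every malloc/free fires the registered 'spdk:' mem event callback. A hedged C sketch of the translation this suite verifies, assuming spdk_env_init() has already run as in the sketch further up:

    /* Sketch: allocate a pinned, hugepage-backed buffer and ask for its
     * physical (or IOVA) address, as the vtophys tests do. */
    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    static void translate_one_buffer(void)
    {
        uint64_t size = 4096;   /* in: bytes of interest; out: contiguous bytes */
        void *buf = spdk_dma_malloc(4096, 4096, NULL);

        if (buf == NULL) {
            return;
        }
        uint64_t paddr = spdk_vtophys(buf, &size);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            fprintf(stderr, "no translation for %p\n", buf);
        } else {
            printf("va %p -> iova 0x%" PRIx64 " (%" PRIu64 " contiguous bytes)\n",
                   buf, paddr, size);
        }
        spdk_dma_free(buf);
    }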
00:05:43.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.713 EAL: Restoring previous memory policy: 4 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was expanded by 18MB 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was shrunk by 18MB 00:05:43.713 EAL: Trying to obtain current memory policy. 00:05:43.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.713 EAL: Restoring previous memory policy: 4 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was expanded by 34MB 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was shrunk by 34MB 00:05:43.713 EAL: Trying to obtain current memory policy. 00:05:43.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.713 EAL: Restoring previous memory policy: 4 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was expanded by 66MB 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was shrunk by 66MB 00:05:43.713 EAL: Trying to obtain current memory policy. 00:05:43.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.713 EAL: Restoring previous memory policy: 4 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was expanded by 130MB 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was shrunk by 130MB 00:05:43.713 EAL: Trying to obtain current memory policy. 00:05:43.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.713 EAL: Restoring previous memory policy: 4 00:05:43.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.713 EAL: request: mp_malloc_sync 00:05:43.713 EAL: No shared files mode enabled, IPC is disabled 00:05:43.713 EAL: Heap on socket 0 was expanded by 258MB 00:05:43.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.974 EAL: request: mp_malloc_sync 00:05:43.974 EAL: No shared files mode enabled, IPC is disabled 00:05:43.974 EAL: Heap on socket 0 was shrunk by 258MB 00:05:43.974 EAL: Trying to obtain current memory policy. 
00:05:43.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.974 EAL: Restoring previous memory policy: 4 00:05:43.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.974 EAL: request: mp_malloc_sync 00:05:43.974 EAL: No shared files mode enabled, IPC is disabled 00:05:43.974 EAL: Heap on socket 0 was expanded by 514MB 00:05:43.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.232 EAL: request: mp_malloc_sync 00:05:44.232 EAL: No shared files mode enabled, IPC is disabled 00:05:44.232 EAL: Heap on socket 0 was shrunk by 514MB 00:05:44.232 EAL: Trying to obtain current memory policy. 00:05:44.232 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.232 EAL: Restoring previous memory policy: 4 00:05:44.232 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.232 EAL: request: mp_malloc_sync 00:05:44.232 EAL: No shared files mode enabled, IPC is disabled 00:05:44.232 EAL: Heap on socket 0 was expanded by 1026MB 00:05:44.491 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.750 EAL: request: mp_malloc_sync 00:05:44.750 EAL: No shared files mode enabled, IPC is disabled 00:05:44.750 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:44.750 passed 00:05:44.750 00:05:44.750 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.750 suites 1 1 n/a 0 0 00:05:44.750 tests 2 2 2 0 0 00:05:44.750 asserts 497 497 497 0 n/a 00:05:44.750 00:05:44.750 Elapsed time = 0.964 seconds 00:05:44.750 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.750 EAL: request: mp_malloc_sync 00:05:44.750 EAL: No shared files mode enabled, IPC is disabled 00:05:44.750 EAL: Heap on socket 0 was shrunk by 2MB 00:05:44.750 EAL: No shared files mode enabled, IPC is disabled 00:05:44.750 EAL: No shared files mode enabled, IPC is disabled 00:05:44.750 EAL: No shared files mode enabled, IPC is disabled 00:05:44.750 00:05:44.750 real 0m1.087s 00:05:44.750 user 0m0.647s 00:05:44.750 sys 0m0.417s 00:05:44.750 20:52:35 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.750 20:52:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:44.750 ************************************ 00:05:44.750 END TEST env_vtophys 00:05:44.750 ************************************ 00:05:44.750 20:52:35 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:44.750 20:52:35 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.750 20:52:35 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.750 20:52:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.750 ************************************ 00:05:44.750 START TEST env_pci 00:05:44.750 ************************************ 00:05:44.750 20:52:35 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:44.750 00:05:44.750 00:05:44.750 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.750 http://cunit.sourceforge.net/ 00:05:44.750 00:05:44.750 00:05:44.750 Suite: pci 00:05:44.750 Test: pci_hook ...[2024-07-13 20:52:35.537168] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3351345 has claimed it 00:05:44.750 EAL: Cannot find device (10000:00:01.0) 00:05:44.750 EAL: Failed to attach device on primary process 00:05:44.750 passed 00:05:44.750 00:05:44.750 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.750 suites 1 
1 n/a 0 0 00:05:44.750 tests 1 1 1 0 0 00:05:44.750 asserts 25 25 25 0 n/a 00:05:44.750 00:05:44.750 Elapsed time = 0.034 seconds 00:05:44.750 00:05:44.750 real 0m0.056s 00:05:44.750 user 0m0.019s 00:05:44.750 sys 0m0.037s 00:05:44.750 20:52:35 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.750 20:52:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:44.750 ************************************ 00:05:44.750 END TEST env_pci 00:05:44.750 ************************************ 00:05:44.750 20:52:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:44.750 20:52:35 env -- env/env.sh@15 -- # uname 00:05:44.750 20:52:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:44.750 20:52:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:44.750 20:52:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:44.750 20:52:35 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:44.750 20:52:35 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.750 20:52:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.009 ************************************ 00:05:45.009 START TEST env_dpdk_post_init 00:05:45.009 ************************************ 00:05:45.009 20:52:35 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:45.009 EAL: Detected CPU lcores: 112 00:05:45.009 EAL: Detected NUMA nodes: 2 00:05:45.009 EAL: Detected shared linkage of DPDK 00:05:45.009 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.009 EAL: Selected IOVA mode 'VA' 00:05:45.009 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.009 EAL: VFIO support initialized 00:05:45.009 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:45.009 EAL: Using IOMMU type 1 (Type 1) 00:05:45.009 EAL: Ignore mapping IO port bar(1) 00:05:45.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:45.009 EAL: Ignore mapping IO port bar(1) 00:05:45.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:45.009 EAL: Ignore mapping IO port bar(1) 00:05:45.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:45.009 EAL: Ignore mapping IO port bar(1) 00:05:45.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:45.009 EAL: Ignore mapping IO port bar(1) 00:05:45.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:45.009 EAL: Ignore mapping IO port bar(1) 00:05:45.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:45.009 EAL: Ignore mapping IO port bar(1) 00:05:45.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:45.009 EAL: Ignore mapping IO port bar(1) 00:05:45.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:45.009 EAL: Ignore mapping IO port bar(1) 00:05:45.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:45.009 EAL: Ignore mapping IO port bar(1) 00:05:45.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:45.268 EAL: Ignore mapping IO port bar(1) 00:05:45.269 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.2 (socket 1) 00:05:45.269 EAL: Ignore mapping IO port bar(1) 00:05:45.269 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:45.269 EAL: Ignore mapping IO port bar(1) 00:05:45.269 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:45.269 EAL: Ignore mapping IO port bar(1) 00:05:45.269 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:45.269 EAL: Ignore mapping IO port bar(1) 00:05:45.269 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:45.269 EAL: Ignore mapping IO port bar(1) 00:05:45.269 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:45.838 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:50.074 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:50.074 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:50.361 Starting DPDK initialization... 00:05:50.361 Starting SPDK post initialization... 00:05:50.361 SPDK NVMe probe 00:05:50.361 Attaching to 0000:d8:00.0 00:05:50.361 Attached to 0000:d8:00.0 00:05:50.361 Cleaning up... 00:05:50.361 00:05:50.361 real 0m5.325s 00:05:50.361 user 0m3.977s 00:05:50.361 sys 0m0.407s 00:05:50.361 20:52:40 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.361 20:52:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.361 ************************************ 00:05:50.361 END TEST env_dpdk_post_init 00:05:50.361 ************************************ 00:05:50.361 20:52:41 env -- env/env.sh@26 -- # uname 00:05:50.361 20:52:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:50.361 20:52:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:50.361 20:52:41 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.361 20:52:41 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.361 20:52:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.361 ************************************ 00:05:50.361 START TEST env_mem_callbacks 00:05:50.361 ************************************ 00:05:50.361 20:52:41 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:50.361 EAL: Detected CPU lcores: 112 00:05:50.361 EAL: Detected NUMA nodes: 2 00:05:50.361 EAL: Detected shared linkage of DPDK 00:05:50.361 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:50.361 EAL: Selected IOVA mode 'VA' 00:05:50.361 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.361 EAL: VFIO support initialized 00:05:50.361 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:50.361 00:05:50.361 00:05:50.361 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.361 http://cunit.sourceforge.net/ 00:05:50.361 00:05:50.361 00:05:50.361 Suite: memory 00:05:50.361 Test: test ... 
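The env_dpdk_post_init run above re-initialized DPDK, probed the ioat DMA channels on both sockets, and attached the NVMe SSD at 0000:d8:00.0 through vfio; the memory-callbacks trace resumes just below. A sketch of the probe/attach handshake behind "SPDK NVMe probe", assuming the device is already bound to vfio-pci (e.g. via scripts/setup.sh) and with error handling trimmed:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;    /* claim every controller the enumeration finds */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int probe_all(void)
    {
        /* NULL trid: enumerate every local PCIe NVMe device vfio exposes */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }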
00:05:50.361 register 0x200000200000 2097152 00:05:50.361 malloc 3145728 00:05:50.361 register 0x200000400000 4194304 00:05:50.361 buf 0x200000500000 len 3145728 PASSED 00:05:50.361 malloc 64 00:05:50.361 buf 0x2000004fff40 len 64 PASSED 00:05:50.361 malloc 4194304 00:05:50.361 register 0x200000800000 6291456 00:05:50.361 buf 0x200000a00000 len 4194304 PASSED 00:05:50.361 free 0x200000500000 3145728 00:05:50.361 free 0x2000004fff40 64 00:05:50.361 unregister 0x200000400000 4194304 PASSED 00:05:50.361 free 0x200000a00000 4194304 00:05:50.361 unregister 0x200000800000 6291456 PASSED 00:05:50.361 malloc 8388608 00:05:50.361 register 0x200000400000 10485760 00:05:50.361 buf 0x200000600000 len 8388608 PASSED 00:05:50.361 free 0x200000600000 8388608 00:05:50.361 unregister 0x200000400000 10485760 PASSED 00:05:50.361 passed 00:05:50.361 00:05:50.361 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.361 suites 1 1 n/a 0 0 00:05:50.361 tests 1 1 1 0 0 00:05:50.361 asserts 15 15 15 0 n/a 00:05:50.361 00:05:50.361 Elapsed time = 0.006 seconds 00:05:50.361 00:05:50.361 real 0m0.068s 00:05:50.361 user 0m0.023s 00:05:50.361 sys 0m0.044s 00:05:50.361 20:52:41 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.361 20:52:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:50.361 ************************************ 00:05:50.361 END TEST env_mem_callbacks 00:05:50.361 ************************************ 00:05:50.361 00:05:50.361 real 0m7.209s 00:05:50.361 user 0m5.015s 00:05:50.361 sys 0m1.269s 00:05:50.361 20:52:41 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.361 20:52:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.361 ************************************ 00:05:50.361 END TEST env 00:05:50.361 ************************************ 00:05:50.361 20:52:41 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:50.361 20:52:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.361 20:52:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.361 20:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:50.621 ************************************ 00:05:50.621 START TEST rpc 00:05:50.621 ************************************ 00:05:50.621 20:52:41 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:50.621 * Looking for test storage... 00:05:50.621 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:50.621 20:52:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3352532 00:05:50.621 20:52:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.621 20:52:41 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:50.621 20:52:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3352532 00:05:50.621 20:52:41 rpc -- common/autotest_common.sh@827 -- # '[' -z 3352532 ']' 00:05:50.621 20:52:41 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.621 20:52:41 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.621 20:52:41 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
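Each register/unregister line in the memory suite above was delivered to the env library's mem event callback, which is what the PASSED markers confirm. A sketch of the same registration applied to externally allocated memory; spdk_mem_register() requires a 2 MiB-aligned address and length, hence MAP_HUGETLB here, which presumes free hugepages on the host:

    /* Sketch: make memory SPDK did not allocate visible to registered
     * mem-event callbacks (and to spdk_vtophys). */
    #include <sys/mman.h>
    #include "spdk/env.h"

    static int register_external_region(void)
    {
        size_t len = 2 * 1024 * 1024;
        void *va = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        if (va == MAP_FAILED) {
            return -1;                /* no free hugepages */
        }
        if (spdk_mem_register(va, len) != 0) {
            munmap(va, len);
            return -1;
        }
        /* ... region is now DMA-visible ... */
        spdk_mem_unregister(va, len);
        munmap(va, len);
        return 0;
    }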
00:05:50.621 20:52:41 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.621 20:52:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.621 [2024-07-13 20:52:41.431402] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:50.621 [2024-07-13 20:52:41.431456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352532 ] 00:05:50.621 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.621 [2024-07-13 20:52:41.503977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.881 [2024-07-13 20:52:41.543792] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:50.881 [2024-07-13 20:52:41.543836] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3352532' to capture a snapshot of events at runtime. 00:05:50.881 [2024-07-13 20:52:41.543846] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.881 [2024-07-13 20:52:41.543855] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.881 [2024-07-13 20:52:41.543862] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3352532 for offline analysis/debug. 00:05:50.881 [2024-07-13 20:52:41.543891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.450 20:52:42 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:51.450 20:52:42 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:51.450 20:52:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:51.450 20:52:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:51.450 20:52:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:51.450 20:52:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:51.450 20:52:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.450 20:52:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.450 20:52:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.450 ************************************ 00:05:51.450 START TEST rpc_integrity 00:05:51.450 ************************************ 00:05:51.450 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:51.450 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.450 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.450 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.450 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.450 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.450 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:51.450 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:05:51.450 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:51.450 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.450 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.450 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.450 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:51.450 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.450 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.450 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.710 { 00:05:51.710 "name": "Malloc0", 00:05:51.710 "aliases": [ 00:05:51.710 "169354ae-c4a0-417c-9d61-e1f15f967730" 00:05:51.710 ], 00:05:51.710 "product_name": "Malloc disk", 00:05:51.710 "block_size": 512, 00:05:51.710 "num_blocks": 16384, 00:05:51.710 "uuid": "169354ae-c4a0-417c-9d61-e1f15f967730", 00:05:51.710 "assigned_rate_limits": { 00:05:51.710 "rw_ios_per_sec": 0, 00:05:51.710 "rw_mbytes_per_sec": 0, 00:05:51.710 "r_mbytes_per_sec": 0, 00:05:51.710 "w_mbytes_per_sec": 0 00:05:51.710 }, 00:05:51.710 "claimed": false, 00:05:51.710 "zoned": false, 00:05:51.710 "supported_io_types": { 00:05:51.710 "read": true, 00:05:51.710 "write": true, 00:05:51.710 "unmap": true, 00:05:51.710 "write_zeroes": true, 00:05:51.710 "flush": true, 00:05:51.710 "reset": true, 00:05:51.710 "compare": false, 00:05:51.710 "compare_and_write": false, 00:05:51.710 "abort": true, 00:05:51.710 "nvme_admin": false, 00:05:51.710 "nvme_io": false 00:05:51.710 }, 00:05:51.710 "memory_domains": [ 00:05:51.710 { 00:05:51.710 "dma_device_id": "system", 00:05:51.710 "dma_device_type": 1 00:05:51.710 }, 00:05:51.710 { 00:05:51.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.710 "dma_device_type": 2 00:05:51.710 } 00:05:51.710 ], 00:05:51.710 "driver_specific": {} 00:05:51.710 } 00:05:51.710 ]' 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.710 [2024-07-13 20:52:42.390312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:51.710 [2024-07-13 20:52:42.390343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.710 [2024-07-13 20:52:42.390358] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f18840 00:05:51.710 [2024-07-13 20:52:42.390367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.710 [2024-07-13 20:52:42.391436] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.710 [2024-07-13 20:52:42.391458] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.710 Passthru0 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.710 20:52:42 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.710 { 00:05:51.710 "name": "Malloc0", 00:05:51.710 "aliases": [ 00:05:51.710 "169354ae-c4a0-417c-9d61-e1f15f967730" 00:05:51.710 ], 00:05:51.710 "product_name": "Malloc disk", 00:05:51.710 "block_size": 512, 00:05:51.710 "num_blocks": 16384, 00:05:51.710 "uuid": "169354ae-c4a0-417c-9d61-e1f15f967730", 00:05:51.710 "assigned_rate_limits": { 00:05:51.710 "rw_ios_per_sec": 0, 00:05:51.710 "rw_mbytes_per_sec": 0, 00:05:51.710 "r_mbytes_per_sec": 0, 00:05:51.710 "w_mbytes_per_sec": 0 00:05:51.710 }, 00:05:51.710 "claimed": true, 00:05:51.710 "claim_type": "exclusive_write", 00:05:51.710 "zoned": false, 00:05:51.710 "supported_io_types": { 00:05:51.710 "read": true, 00:05:51.710 "write": true, 00:05:51.710 "unmap": true, 00:05:51.710 "write_zeroes": true, 00:05:51.710 "flush": true, 00:05:51.710 "reset": true, 00:05:51.710 "compare": false, 00:05:51.710 "compare_and_write": false, 00:05:51.710 "abort": true, 00:05:51.710 "nvme_admin": false, 00:05:51.710 "nvme_io": false 00:05:51.710 }, 00:05:51.710 "memory_domains": [ 00:05:51.710 { 00:05:51.710 "dma_device_id": "system", 00:05:51.710 "dma_device_type": 1 00:05:51.710 }, 00:05:51.710 { 00:05:51.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.710 "dma_device_type": 2 00:05:51.710 } 00:05:51.710 ], 00:05:51.710 "driver_specific": {} 00:05:51.710 }, 00:05:51.710 { 00:05:51.710 "name": "Passthru0", 00:05:51.710 "aliases": [ 00:05:51.710 "907efefb-bde7-528d-9084-54294d787a33" 00:05:51.710 ], 00:05:51.710 "product_name": "passthru", 00:05:51.710 "block_size": 512, 00:05:51.710 "num_blocks": 16384, 00:05:51.710 "uuid": "907efefb-bde7-528d-9084-54294d787a33", 00:05:51.710 "assigned_rate_limits": { 00:05:51.710 "rw_ios_per_sec": 0, 00:05:51.710 "rw_mbytes_per_sec": 0, 00:05:51.710 "r_mbytes_per_sec": 0, 00:05:51.710 "w_mbytes_per_sec": 0 00:05:51.710 }, 00:05:51.710 "claimed": false, 00:05:51.710 "zoned": false, 00:05:51.710 "supported_io_types": { 00:05:51.710 "read": true, 00:05:51.710 "write": true, 00:05:51.710 "unmap": true, 00:05:51.710 "write_zeroes": true, 00:05:51.710 "flush": true, 00:05:51.710 "reset": true, 00:05:51.710 "compare": false, 00:05:51.710 "compare_and_write": false, 00:05:51.710 "abort": true, 00:05:51.710 "nvme_admin": false, 00:05:51.710 "nvme_io": false 00:05:51.710 }, 00:05:51.710 "memory_domains": [ 00:05:51.710 { 00:05:51.710 "dma_device_id": "system", 00:05:51.710 "dma_device_type": 1 00:05:51.710 }, 00:05:51.710 { 00:05:51.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.710 "dma_device_type": 2 00:05:51.710 } 00:05:51.710 ], 00:05:51.710 "driver_specific": { 00:05:51.710 "passthru": { 00:05:51.710 "name": "Passthru0", 00:05:51.710 "base_bdev_name": "Malloc0" 00:05:51.710 } 00:05:51.710 } 00:05:51.710 } 00:05:51.710 ]' 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.710 20:52:42 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.710 20:52:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.710 00:05:51.710 real 0m0.265s 00:05:51.710 user 0m0.166s 00:05:51.710 sys 0m0.049s 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.710 20:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.710 ************************************ 00:05:51.710 END TEST rpc_integrity 00:05:51.710 ************************************ 00:05:51.710 20:52:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:51.711 20:52:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.711 20:52:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.711 20:52:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.970 ************************************ 00:05:51.970 START TEST rpc_plugins 00:05:51.970 ************************************ 00:05:51.970 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:51.970 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:51.970 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.970 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.970 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.970 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:51.970 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:51.970 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.970 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.970 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.970 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:51.970 { 00:05:51.970 "name": "Malloc1", 00:05:51.970 "aliases": [ 00:05:51.970 "483db2be-ab29-4889-975c-69636924cd53" 00:05:51.970 ], 00:05:51.970 "product_name": "Malloc disk", 00:05:51.970 "block_size": 4096, 00:05:51.970 "num_blocks": 256, 00:05:51.970 "uuid": "483db2be-ab29-4889-975c-69636924cd53", 00:05:51.970 "assigned_rate_limits": { 00:05:51.970 "rw_ios_per_sec": 0, 00:05:51.970 "rw_mbytes_per_sec": 0, 00:05:51.971 "r_mbytes_per_sec": 0, 00:05:51.971 "w_mbytes_per_sec": 0 00:05:51.971 }, 00:05:51.971 "claimed": false, 00:05:51.971 "zoned": false, 00:05:51.971 "supported_io_types": { 00:05:51.971 "read": true, 00:05:51.971 "write": true, 00:05:51.971 "unmap": true, 00:05:51.971 "write_zeroes": true, 00:05:51.971 "flush": true, 00:05:51.971 
"reset": true, 00:05:51.971 "compare": false, 00:05:51.971 "compare_and_write": false, 00:05:51.971 "abort": true, 00:05:51.971 "nvme_admin": false, 00:05:51.971 "nvme_io": false 00:05:51.971 }, 00:05:51.971 "memory_domains": [ 00:05:51.971 { 00:05:51.971 "dma_device_id": "system", 00:05:51.971 "dma_device_type": 1 00:05:51.971 }, 00:05:51.971 { 00:05:51.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.971 "dma_device_type": 2 00:05:51.971 } 00:05:51.971 ], 00:05:51.971 "driver_specific": {} 00:05:51.971 } 00:05:51.971 ]' 00:05:51.971 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:51.971 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:51.971 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:51.971 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.971 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.971 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:51.971 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.971 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.971 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:51.971 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:51.971 20:52:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:51.971 00:05:51.971 real 0m0.134s 00:05:51.971 user 0m0.086s 00:05:51.971 sys 0m0.022s 00:05:51.971 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.971 20:52:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 ************************************ 00:05:51.971 END TEST rpc_plugins 00:05:51.971 ************************************ 00:05:51.971 20:52:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:51.971 20:52:42 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.971 20:52:42 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.971 20:52:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 ************************************ 00:05:51.971 START TEST rpc_trace_cmd_test 00:05:51.971 ************************************ 00:05:51.971 20:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:51.971 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:51.971 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:51.971 20:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.971 20:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:51.971 20:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.971 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:51.971 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3352532", 00:05:51.971 "tpoint_group_mask": "0x8", 00:05:51.971 "iscsi_conn": { 00:05:51.971 "mask": "0x2", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "scsi": { 00:05:51.971 "mask": "0x4", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "bdev": { 00:05:51.971 "mask": "0x8", 00:05:51.971 "tpoint_mask": "0xffffffffffffffff" 00:05:51.971 }, 
00:05:51.971 "nvmf_rdma": { 00:05:51.971 "mask": "0x10", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "nvmf_tcp": { 00:05:51.971 "mask": "0x20", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "ftl": { 00:05:51.971 "mask": "0x40", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "blobfs": { 00:05:51.971 "mask": "0x80", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "dsa": { 00:05:51.971 "mask": "0x200", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "thread": { 00:05:51.971 "mask": "0x400", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "nvme_pcie": { 00:05:51.971 "mask": "0x800", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "iaa": { 00:05:51.971 "mask": "0x1000", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "nvme_tcp": { 00:05:51.971 "mask": "0x2000", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "bdev_nvme": { 00:05:51.971 "mask": "0x4000", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 }, 00:05:51.971 "sock": { 00:05:51.971 "mask": "0x8000", 00:05:51.971 "tpoint_mask": "0x0" 00:05:51.971 } 00:05:51.971 }' 00:05:51.971 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:52.229 00:05:52.229 real 0m0.192s 00:05:52.229 user 0m0.151s 00:05:52.229 sys 0m0.032s 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.229 20:52:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.229 ************************************ 00:05:52.229 END TEST rpc_trace_cmd_test 00:05:52.229 ************************************ 00:05:52.230 20:52:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:52.230 20:52:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:52.230 20:52:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:52.230 20:52:43 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.230 20:52:43 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.230 20:52:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.230 ************************************ 00:05:52.230 START TEST rpc_daemon_integrity 00:05:52.230 ************************************ 00:05:52.230 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:52.230 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:52.230 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.230 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.230 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:05:52.230 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:52.230 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:52.488 { 00:05:52.488 "name": "Malloc2", 00:05:52.488 "aliases": [ 00:05:52.488 "bebc69f3-4835-448c-bee6-09027178a219" 00:05:52.488 ], 00:05:52.488 "product_name": "Malloc disk", 00:05:52.488 "block_size": 512, 00:05:52.488 "num_blocks": 16384, 00:05:52.488 "uuid": "bebc69f3-4835-448c-bee6-09027178a219", 00:05:52.488 "assigned_rate_limits": { 00:05:52.488 "rw_ios_per_sec": 0, 00:05:52.488 "rw_mbytes_per_sec": 0, 00:05:52.488 "r_mbytes_per_sec": 0, 00:05:52.488 "w_mbytes_per_sec": 0 00:05:52.488 }, 00:05:52.488 "claimed": false, 00:05:52.488 "zoned": false, 00:05:52.488 "supported_io_types": { 00:05:52.488 "read": true, 00:05:52.488 "write": true, 00:05:52.488 "unmap": true, 00:05:52.488 "write_zeroes": true, 00:05:52.488 "flush": true, 00:05:52.488 "reset": true, 00:05:52.488 "compare": false, 00:05:52.488 "compare_and_write": false, 00:05:52.488 "abort": true, 00:05:52.488 "nvme_admin": false, 00:05:52.488 "nvme_io": false 00:05:52.488 }, 00:05:52.488 "memory_domains": [ 00:05:52.488 { 00:05:52.488 "dma_device_id": "system", 00:05:52.488 "dma_device_type": 1 00:05:52.488 }, 00:05:52.488 { 00:05:52.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.488 "dma_device_type": 2 00:05:52.488 } 00:05:52.488 ], 00:05:52.488 "driver_specific": {} 00:05:52.488 } 00:05:52.488 ]' 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.488 [2024-07-13 20:52:43.204521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:52.488 [2024-07-13 20:52:43.204550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:52.488 [2024-07-13 20:52:43.204565] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20c1c70 00:05:52.488 [2024-07-13 20:52:43.204574] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:52.488 [2024-07-13 20:52:43.205482] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:05:52.488 [2024-07-13 20:52:43.205503] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:52.488 Passthru0 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.488 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:52.488 { 00:05:52.488 "name": "Malloc2", 00:05:52.488 "aliases": [ 00:05:52.488 "bebc69f3-4835-448c-bee6-09027178a219" 00:05:52.488 ], 00:05:52.488 "product_name": "Malloc disk", 00:05:52.488 "block_size": 512, 00:05:52.488 "num_blocks": 16384, 00:05:52.488 "uuid": "bebc69f3-4835-448c-bee6-09027178a219", 00:05:52.488 "assigned_rate_limits": { 00:05:52.488 "rw_ios_per_sec": 0, 00:05:52.488 "rw_mbytes_per_sec": 0, 00:05:52.488 "r_mbytes_per_sec": 0, 00:05:52.488 "w_mbytes_per_sec": 0 00:05:52.488 }, 00:05:52.488 "claimed": true, 00:05:52.488 "claim_type": "exclusive_write", 00:05:52.488 "zoned": false, 00:05:52.488 "supported_io_types": { 00:05:52.488 "read": true, 00:05:52.488 "write": true, 00:05:52.488 "unmap": true, 00:05:52.488 "write_zeroes": true, 00:05:52.488 "flush": true, 00:05:52.488 "reset": true, 00:05:52.488 "compare": false, 00:05:52.488 "compare_and_write": false, 00:05:52.488 "abort": true, 00:05:52.488 "nvme_admin": false, 00:05:52.488 "nvme_io": false 00:05:52.488 }, 00:05:52.488 "memory_domains": [ 00:05:52.488 { 00:05:52.488 "dma_device_id": "system", 00:05:52.488 "dma_device_type": 1 00:05:52.488 }, 00:05:52.489 { 00:05:52.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.489 "dma_device_type": 2 00:05:52.489 } 00:05:52.489 ], 00:05:52.489 "driver_specific": {} 00:05:52.489 }, 00:05:52.489 { 00:05:52.489 "name": "Passthru0", 00:05:52.489 "aliases": [ 00:05:52.489 "d811b764-d354-55f5-b3e6-458f067e4a92" 00:05:52.489 ], 00:05:52.489 "product_name": "passthru", 00:05:52.489 "block_size": 512, 00:05:52.489 "num_blocks": 16384, 00:05:52.489 "uuid": "d811b764-d354-55f5-b3e6-458f067e4a92", 00:05:52.489 "assigned_rate_limits": { 00:05:52.489 "rw_ios_per_sec": 0, 00:05:52.489 "rw_mbytes_per_sec": 0, 00:05:52.489 "r_mbytes_per_sec": 0, 00:05:52.489 "w_mbytes_per_sec": 0 00:05:52.489 }, 00:05:52.489 "claimed": false, 00:05:52.489 "zoned": false, 00:05:52.489 "supported_io_types": { 00:05:52.489 "read": true, 00:05:52.489 "write": true, 00:05:52.489 "unmap": true, 00:05:52.489 "write_zeroes": true, 00:05:52.489 "flush": true, 00:05:52.489 "reset": true, 00:05:52.489 "compare": false, 00:05:52.489 "compare_and_write": false, 00:05:52.489 "abort": true, 00:05:52.489 "nvme_admin": false, 00:05:52.489 "nvme_io": false 00:05:52.489 }, 00:05:52.489 "memory_domains": [ 00:05:52.489 { 00:05:52.489 "dma_device_id": "system", 00:05:52.489 "dma_device_type": 1 00:05:52.489 }, 00:05:52.489 { 00:05:52.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.489 "dma_device_type": 2 00:05:52.489 } 00:05:52.489 ], 00:05:52.489 "driver_specific": { 00:05:52.489 "passthru": { 00:05:52.489 "name": "Passthru0", 00:05:52.489 "base_bdev_name": "Malloc2" 00:05:52.489 } 00:05:52.489 } 00:05:52.489 } 00:05:52.489 ]' 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 
00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:52.489 00:05:52.489 real 0m0.271s 00:05:52.489 user 0m0.174s 00:05:52.489 sys 0m0.043s 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.489 20:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.489 ************************************ 00:05:52.489 END TEST rpc_daemon_integrity 00:05:52.489 ************************************ 00:05:52.747 20:52:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:52.747 20:52:43 rpc -- rpc/rpc.sh@84 -- # killprocess 3352532 00:05:52.747 20:52:43 rpc -- common/autotest_common.sh@946 -- # '[' -z 3352532 ']' 00:05:52.747 20:52:43 rpc -- common/autotest_common.sh@950 -- # kill -0 3352532 00:05:52.747 20:52:43 rpc -- common/autotest_common.sh@951 -- # uname 00:05:52.747 20:52:43 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:52.747 20:52:43 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3352532 00:05:52.747 20:52:43 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:52.747 20:52:43 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:52.747 20:52:43 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3352532' 00:05:52.748 killing process with pid 3352532 00:05:52.748 20:52:43 rpc -- common/autotest_common.sh@965 -- # kill 3352532 00:05:52.748 20:52:43 rpc -- common/autotest_common.sh@970 -- # wait 3352532 00:05:53.007 00:05:53.007 real 0m2.464s 00:05:53.007 user 0m3.076s 00:05:53.007 sys 0m0.817s 00:05:53.007 20:52:43 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.007 20:52:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.007 ************************************ 00:05:53.007 END TEST rpc 00:05:53.007 ************************************ 00:05:53.007 20:52:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:53.007 20:52:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
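The rpc suite above drove spdk_tgt entirely through rpc_cmd, a thin wrapper over scripts/rpc.py speaking JSON-RPC on /var/tmp/spdk.sock. The same exchange can be issued from C with SPDK's jsonrpc client; a hedged sketch using spdk_get_version, the very method the skip_rpc test below expects to fail once the target runs with --no-rpc-server (the return-value conventions of the poll loop are as I understand them from the public header, not verified here):

    #include <sys/socket.h>
    #include <stdio.h>
    #include "spdk/jsonrpc.h"

    static int get_version(void)
    {
        struct spdk_jsonrpc_client *client =
            spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);
        if (client == NULL) {
            return -1;                /* no listener: the skip_rpc case below */
        }

        struct spdk_jsonrpc_client_request *req = spdk_jsonrpc_client_create_request();
        struct spdk_json_write_ctx *w = spdk_jsonrpc_begin_request(req, 1, "spdk_get_version");
        spdk_jsonrpc_end_request(req, w);
        spdk_jsonrpc_client_send_request(client, req);   /* takes ownership of req */

        while (spdk_jsonrpc_client_poll(client, 1) == 0) {
            ;                         /* keep polling until a response or an error */
        }
        struct spdk_jsonrpc_client_response *resp = spdk_jsonrpc_client_get_response(client);
        if (resp != NULL) {
            spdk_jsonrpc_client_free_response(resp);
        }
        spdk_jsonrpc_client_close(client);
        return 0;
    }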
00:05:53.007 20:52:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.007 20:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:53.007 ************************************ 00:05:53.007 START TEST skip_rpc 00:05:53.007 ************************************ 00:05:53.007 20:52:43 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:53.266 * Looking for test storage... 00:05:53.266 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:53.266 20:52:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:53.266 20:52:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:53.266 20:52:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:53.266 20:52:43 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.266 20:52:43 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.266 20:52:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.266 ************************************ 00:05:53.266 START TEST skip_rpc 00:05:53.266 ************************************ 00:05:53.266 20:52:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:53.266 20:52:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3353001 00:05:53.266 20:52:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.266 20:52:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:53.266 20:52:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:53.266 [2024-07-13 20:52:44.012905] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:53.266 [2024-07-13 20:52:44.012950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353001 ] 00:05:53.266 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.266 [2024-07-13 20:52:44.082858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.266 [2024-07-13 20:52:44.121383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3353001 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3353001 ']' 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3353001 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:58.538 20:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3353001 00:05:58.538 20:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:58.538 20:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:58.538 20:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3353001' 00:05:58.538 killing process with pid 3353001 00:05:58.538 20:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3353001 00:05:58.538 20:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3353001 00:05:58.538 00:05:58.538 real 0m5.366s 00:05:58.538 user 0m5.114s 00:05:58.538 sys 0m0.297s 00:05:58.538 20:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.538 20:52:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.538 ************************************ 00:05:58.538 END TEST skip_rpc 
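(The target above was started with --no-rpc-server, so no listener ever appears on the default /var/tmp/spdk.sock; the NOT rpc_cmd block that follows passes only if the client call fails. A standalone sketch of the same assertion — binary path, core mask, and the 5-second settle are taken from this run, the rest is illustrative:)

build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5
# With no RPC listener, this client call must fail; the test
# treats a non-zero exit here as a pass.
if scripts/rpc.py spdk_get_version; then
  echo "FAIL: RPC answered although --no-rpc-server was given" >&2
fi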
00:05:58.538 ************************************ 00:05:58.538 20:52:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:58.538 20:52:49 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.538 20:52:49 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.538 20:52:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.538 ************************************ 00:05:58.538 START TEST skip_rpc_with_json 00:05:58.538 ************************************ 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3354074 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3354074 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3354074 ']' 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.538 20:52:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.797 [2024-07-13 20:52:49.443800] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:58.797 [2024-07-13 20:52:49.443844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3354074 ] 00:05:58.797 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.797 [2024-07-13 20:52:49.511891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.797 [2024-07-13 20:52:49.550821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.364 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.364 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:59.364 20:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:59.364 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.364 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.622 [2024-07-13 20:52:50.256758] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:59.622 request: 00:05:59.622 { 00:05:59.622 "trtype": "tcp", 00:05:59.622 "method": "nvmf_get_transports", 00:05:59.622 "req_id": 1 00:05:59.622 } 00:05:59.622 Got JSON-RPC error response 00:05:59.622 response: 00:05:59.622 { 00:05:59.622 "code": -19, 00:05:59.622 "message": "No such device" 00:05:59.622 } 00:05:59.622 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:59.622 20:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:59.622 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.622 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.622 [2024-07-13 20:52:50.268862] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:59.622 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.622 20:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:59.622 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.622 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.622 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.622 20:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:59.622 { 00:05:59.622 "subsystems": [ 00:05:59.622 { 00:05:59.622 "subsystem": "keyring", 00:05:59.622 "config": [] 00:05:59.622 }, 00:05:59.622 { 00:05:59.622 "subsystem": "iobuf", 00:05:59.622 "config": [ 00:05:59.622 { 00:05:59.622 "method": "iobuf_set_options", 00:05:59.622 "params": { 00:05:59.622 "small_pool_count": 8192, 00:05:59.622 "large_pool_count": 1024, 00:05:59.622 "small_bufsize": 8192, 00:05:59.622 "large_bufsize": 135168 00:05:59.622 } 00:05:59.622 } 00:05:59.622 ] 00:05:59.622 }, 00:05:59.622 { 00:05:59.622 "subsystem": "sock", 00:05:59.622 "config": [ 00:05:59.622 { 00:05:59.622 "method": "sock_set_default_impl", 00:05:59.622 "params": { 00:05:59.622 "impl_name": "posix" 00:05:59.622 } 00:05:59.622 }, 00:05:59.622 { 00:05:59.622 "method": "sock_impl_set_options", 00:05:59.622 "params": { 00:05:59.622 "impl_name": "ssl", 00:05:59.622 "recv_buf_size": 4096, 
00:05:59.622 "send_buf_size": 4096, 00:05:59.622 "enable_recv_pipe": true, 00:05:59.623 "enable_quickack": false, 00:05:59.623 "enable_placement_id": 0, 00:05:59.623 "enable_zerocopy_send_server": true, 00:05:59.623 "enable_zerocopy_send_client": false, 00:05:59.623 "zerocopy_threshold": 0, 00:05:59.623 "tls_version": 0, 00:05:59.623 "enable_ktls": false 00:05:59.623 } 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "method": "sock_impl_set_options", 00:05:59.623 "params": { 00:05:59.623 "impl_name": "posix", 00:05:59.623 "recv_buf_size": 2097152, 00:05:59.623 "send_buf_size": 2097152, 00:05:59.623 "enable_recv_pipe": true, 00:05:59.623 "enable_quickack": false, 00:05:59.623 "enable_placement_id": 0, 00:05:59.623 "enable_zerocopy_send_server": true, 00:05:59.623 "enable_zerocopy_send_client": false, 00:05:59.623 "zerocopy_threshold": 0, 00:05:59.623 "tls_version": 0, 00:05:59.623 "enable_ktls": false 00:05:59.623 } 00:05:59.623 } 00:05:59.623 ] 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "subsystem": "vmd", 00:05:59.623 "config": [] 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "subsystem": "accel", 00:05:59.623 "config": [ 00:05:59.623 { 00:05:59.623 "method": "accel_set_options", 00:05:59.623 "params": { 00:05:59.623 "small_cache_size": 128, 00:05:59.623 "large_cache_size": 16, 00:05:59.623 "task_count": 2048, 00:05:59.623 "sequence_count": 2048, 00:05:59.623 "buf_count": 2048 00:05:59.623 } 00:05:59.623 } 00:05:59.623 ] 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "subsystem": "bdev", 00:05:59.623 "config": [ 00:05:59.623 { 00:05:59.623 "method": "bdev_set_options", 00:05:59.623 "params": { 00:05:59.623 "bdev_io_pool_size": 65535, 00:05:59.623 "bdev_io_cache_size": 256, 00:05:59.623 "bdev_auto_examine": true, 00:05:59.623 "iobuf_small_cache_size": 128, 00:05:59.623 "iobuf_large_cache_size": 16 00:05:59.623 } 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "method": "bdev_raid_set_options", 00:05:59.623 "params": { 00:05:59.623 "process_window_size_kb": 1024 00:05:59.623 } 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "method": "bdev_iscsi_set_options", 00:05:59.623 "params": { 00:05:59.623 "timeout_sec": 30 00:05:59.623 } 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "method": "bdev_nvme_set_options", 00:05:59.623 "params": { 00:05:59.623 "action_on_timeout": "none", 00:05:59.623 "timeout_us": 0, 00:05:59.623 "timeout_admin_us": 0, 00:05:59.623 "keep_alive_timeout_ms": 10000, 00:05:59.623 "arbitration_burst": 0, 00:05:59.623 "low_priority_weight": 0, 00:05:59.623 "medium_priority_weight": 0, 00:05:59.623 "high_priority_weight": 0, 00:05:59.623 "nvme_adminq_poll_period_us": 10000, 00:05:59.623 "nvme_ioq_poll_period_us": 0, 00:05:59.623 "io_queue_requests": 0, 00:05:59.623 "delay_cmd_submit": true, 00:05:59.623 "transport_retry_count": 4, 00:05:59.623 "bdev_retry_count": 3, 00:05:59.623 "transport_ack_timeout": 0, 00:05:59.623 "ctrlr_loss_timeout_sec": 0, 00:05:59.623 "reconnect_delay_sec": 0, 00:05:59.623 "fast_io_fail_timeout_sec": 0, 00:05:59.623 "disable_auto_failback": false, 00:05:59.623 "generate_uuids": false, 00:05:59.623 "transport_tos": 0, 00:05:59.623 "nvme_error_stat": false, 00:05:59.623 "rdma_srq_size": 0, 00:05:59.623 "io_path_stat": false, 00:05:59.623 "allow_accel_sequence": false, 00:05:59.623 "rdma_max_cq_size": 0, 00:05:59.623 "rdma_cm_event_timeout_ms": 0, 00:05:59.623 "dhchap_digests": [ 00:05:59.623 "sha256", 00:05:59.623 "sha384", 00:05:59.623 "sha512" 00:05:59.623 ], 00:05:59.623 "dhchap_dhgroups": [ 00:05:59.623 "null", 00:05:59.623 "ffdhe2048", 00:05:59.623 "ffdhe3072", 
00:05:59.623 "ffdhe4096", 00:05:59.623 "ffdhe6144", 00:05:59.623 "ffdhe8192" 00:05:59.623 ] 00:05:59.623 } 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "method": "bdev_nvme_set_hotplug", 00:05:59.623 "params": { 00:05:59.623 "period_us": 100000, 00:05:59.623 "enable": false 00:05:59.623 } 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "method": "bdev_wait_for_examine" 00:05:59.623 } 00:05:59.623 ] 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "subsystem": "scsi", 00:05:59.623 "config": null 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "subsystem": "scheduler", 00:05:59.623 "config": [ 00:05:59.623 { 00:05:59.623 "method": "framework_set_scheduler", 00:05:59.623 "params": { 00:05:59.623 "name": "static" 00:05:59.623 } 00:05:59.623 } 00:05:59.623 ] 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "subsystem": "vhost_scsi", 00:05:59.623 "config": [] 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "subsystem": "vhost_blk", 00:05:59.623 "config": [] 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "subsystem": "ublk", 00:05:59.623 "config": [] 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "subsystem": "nbd", 00:05:59.623 "config": [] 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "subsystem": "nvmf", 00:05:59.623 "config": [ 00:05:59.623 { 00:05:59.623 "method": "nvmf_set_config", 00:05:59.623 "params": { 00:05:59.623 "discovery_filter": "match_any", 00:05:59.623 "admin_cmd_passthru": { 00:05:59.623 "identify_ctrlr": false 00:05:59.623 } 00:05:59.623 } 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "method": "nvmf_set_max_subsystems", 00:05:59.623 "params": { 00:05:59.623 "max_subsystems": 1024 00:05:59.623 } 00:05:59.623 }, 00:05:59.623 { 00:05:59.623 "method": "nvmf_set_crdt", 00:05:59.623 "params": { 00:05:59.623 "crdt1": 0, 00:05:59.623 "crdt2": 0, 00:05:59.623 "crdt3": 0 00:05:59.624 } 00:05:59.624 }, 00:05:59.624 { 00:05:59.624 "method": "nvmf_create_transport", 00:05:59.624 "params": { 00:05:59.624 "trtype": "TCP", 00:05:59.624 "max_queue_depth": 128, 00:05:59.624 "max_io_qpairs_per_ctrlr": 127, 00:05:59.624 "in_capsule_data_size": 4096, 00:05:59.624 "max_io_size": 131072, 00:05:59.624 "io_unit_size": 131072, 00:05:59.624 "max_aq_depth": 128, 00:05:59.624 "num_shared_buffers": 511, 00:05:59.624 "buf_cache_size": 4294967295, 00:05:59.624 "dif_insert_or_strip": false, 00:05:59.624 "zcopy": false, 00:05:59.624 "c2h_success": true, 00:05:59.624 "sock_priority": 0, 00:05:59.624 "abort_timeout_sec": 1, 00:05:59.624 "ack_timeout": 0, 00:05:59.624 "data_wr_pool_size": 0 00:05:59.624 } 00:05:59.624 } 00:05:59.624 ] 00:05:59.624 }, 00:05:59.624 { 00:05:59.624 "subsystem": "iscsi", 00:05:59.624 "config": [ 00:05:59.624 { 00:05:59.624 "method": "iscsi_set_options", 00:05:59.624 "params": { 00:05:59.624 "node_base": "iqn.2016-06.io.spdk", 00:05:59.624 "max_sessions": 128, 00:05:59.624 "max_connections_per_session": 2, 00:05:59.624 "max_queue_depth": 64, 00:05:59.624 "default_time2wait": 2, 00:05:59.624 "default_time2retain": 20, 00:05:59.624 "first_burst_length": 8192, 00:05:59.624 "immediate_data": true, 00:05:59.624 "allow_duplicated_isid": false, 00:05:59.624 "error_recovery_level": 0, 00:05:59.624 "nop_timeout": 60, 00:05:59.624 "nop_in_interval": 30, 00:05:59.624 "disable_chap": false, 00:05:59.624 "require_chap": false, 00:05:59.624 "mutual_chap": false, 00:05:59.624 "chap_group": 0, 00:05:59.624 "max_large_datain_per_connection": 64, 00:05:59.624 "max_r2t_per_connection": 4, 00:05:59.624 "pdu_pool_size": 36864, 00:05:59.624 "immediate_data_pool_size": 16384, 00:05:59.624 "data_out_pool_size": 2048 00:05:59.624 } 
00:05:59.624 } 00:05:59.624 ] 00:05:59.624 } 00:05:59.624 ] 00:05:59.624 } 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3354074 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3354074 ']' 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3354074 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3354074 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3354074' 00:05:59.624 killing process with pid 3354074 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3354074 00:05:59.624 20:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3354074 00:06:00.191 20:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3354350 00:06:00.191 20:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:00.191 20:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3354350 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3354350 ']' 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3354350 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3354350 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3354350' 00:06:05.476 killing process with pid 3354350 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3354350 00:06:05.476 20:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3354350 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:05.476 00:06:05.476 real 0m6.757s 00:06:05.476 user 0m6.580s 00:06:05.476 sys 0m0.651s 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:05.476 ************************************ 00:06:05.476 END TEST skip_rpc_with_json 00:06:05.476 ************************************ 00:06:05.476 20:52:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:05.476 20:52:56 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.476 20:52:56 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.476 20:52:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.476 ************************************ 00:06:05.476 START TEST skip_rpc_with_delay 00:06:05.476 ************************************ 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:05.476 [2024-07-13 20:52:56.251315] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
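(The skip_rpc_with_delay case starting here deliberately combines --no-rpc-server with --wait-for-rpc; app.c rejects the combination — the *ERROR* line above — and the test passes when the launch exits non-zero. Sketched as a plain shell check with the same flags the script passes:)

if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
  echo "FAIL: target started despite the conflicting flags" >&2
fi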
00:06:05.476 [2024-07-13 20:52:56.251378] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.476 00:06:05.476 real 0m0.053s 00:06:05.476 user 0m0.030s 00:06:05.476 sys 0m0.023s 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.476 20:52:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:05.476 ************************************ 00:06:05.476 END TEST skip_rpc_with_delay 00:06:05.476 ************************************ 00:06:05.476 20:52:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:05.476 20:52:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:05.476 20:52:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:05.476 20:52:56 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.476 20:52:56 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.476 20:52:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.476 ************************************ 00:06:05.476 START TEST exit_on_failed_rpc_init 00:06:05.476 ************************************ 00:06:05.476 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:05.476 20:52:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3355218 00:06:05.476 20:52:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.476 20:52:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3355218 00:06:05.476 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3355218 ']' 00:06:05.476 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.476 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.476 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.476 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.476 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.736 [2024-07-13 20:52:56.394164] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:05.736 [2024-07-13 20:52:56.394216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3355218 ] 00:06:05.736 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.736 [2024-07-13 20:52:56.464713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.736 [2024-07-13 20:52:56.504044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.995 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.995 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:05.995 20:52:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.995 20:52:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:05.996 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.996 [2024-07-13 20:52:56.735801] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:05.996 [2024-07-13 20:52:56.735855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3355391 ] 00:06:05.996 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.996 [2024-07-13 20:52:56.806604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.996 [2024-07-13 20:52:56.845238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.996 [2024-07-13 20:52:56.845312] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
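(exit_on_failed_rpc_init holds pid 3355218 on the default /var/tmp/spdk.sock and then launches a second target on core mask 0x2; rpc.c refuses the busy socket — "in use. Specify another." above — so the second instance must exit non-zero. A sketch of the same collision, with an illustrative sleep standing in for the harness's waitforlisten polling:)

build/bin/spdk_tgt -m 0x1 &          # first instance binds /var/tmp/spdk.sock
sleep 1                              # illustrative; the harness polls the socket instead
if build/bin/spdk_tgt -m 0x2; then   # second instance: RPC init must fail
  echo "FAIL: second target bound an already-claimed socket" >&2
fi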
00:06:05.996 [2024-07-13 20:52:56.845323] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:05.996 [2024-07-13 20:52:56.845331] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3355218 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3355218 ']' 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3355218 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3355218 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.255 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.256 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3355218' 00:06:06.256 killing process with pid 3355218 00:06:06.256 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3355218 00:06:06.256 20:52:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3355218 00:06:06.515 00:06:06.515 real 0m0.916s 00:06:06.515 user 0m0.947s 00:06:06.515 sys 0m0.414s 00:06:06.515 20:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.515 20:52:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.515 ************************************ 00:06:06.515 END TEST exit_on_failed_rpc_init 00:06:06.515 ************************************ 00:06:06.515 20:52:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:06.515 00:06:06.515 real 0m13.486s 00:06:06.515 user 0m12.815s 00:06:06.515 sys 0m1.664s 00:06:06.515 20:52:57 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.515 20:52:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.515 ************************************ 00:06:06.515 END TEST skip_rpc 00:06:06.516 ************************************ 00:06:06.516 20:52:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:06.516 20:52:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.516 20:52:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.516 20:52:57 -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.516 ************************************ 00:06:06.516 START TEST rpc_client 00:06:06.516 ************************************ 00:06:06.516 20:52:57 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:06.775 * Looking for test storage... 00:06:06.775 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:06:06.775 20:52:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:06.775 OK 00:06:06.775 20:52:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:06.775 00:06:06.775 real 0m0.137s 00:06:06.775 user 0m0.064s 00:06:06.775 sys 0m0.084s 00:06:06.775 20:52:57 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.775 20:52:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:06.775 ************************************ 00:06:06.775 END TEST rpc_client 00:06:06.775 ************************************ 00:06:06.775 20:52:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:06.775 20:52:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.775 20:52:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.775 20:52:57 -- common/autotest_common.sh@10 -- # set +x 00:06:06.775 ************************************ 00:06:06.775 START TEST json_config 00:06:06.775 ************************************ 00:06:06.775 20:52:57 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:07.035 20:52:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:07.035 20:52:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.035 20:52:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.035 20:52:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.035 20:52:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.035 20:52:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.035 20:52:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.035 20:52:57 json_config -- paths/export.sh@5 -- # export PATH 00:06:07.035 20:52:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@47 -- # : 0 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:07.035 20:52:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:07.035 20:52:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:07.035 20:52:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:07.035 20:52:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:07.035 20:52:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:07.035 20:52:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:07.035 20:52:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:07.035 20:52:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:07.035 20:52:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:07.036 INFO: JSON configuration test init 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:07.036 20:52:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:07.036 20:52:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:07.036 20:52:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:07.036 20:52:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.036 20:52:57 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:07.036 20:52:57 json_config -- json_config/common.sh@9 -- # local app=target 00:06:07.036 20:52:57 json_config -- json_config/common.sh@10 -- # shift 00:06:07.036 20:52:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:07.036 20:52:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:07.036 20:52:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:07.036 20:52:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.036 20:52:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.036 20:52:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3355598 00:06:07.036 20:52:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:07.036 Waiting for target to run... 
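(This json_config run gives the target its own socket, -r /var/tmp/spdk_tgt.sock, and holds it at --wait-for-rpc, so every subsequent RPC has to name that socket explicitly; the lines that follow feed it a generated NVMe configuration. The equivalent standalone pipe — both scripts and the socket path are the ones shown in this log:)

scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config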
00:06:07.036 20:52:57 json_config -- json_config/common.sh@25 -- # waitforlisten 3355598 /var/tmp/spdk_tgt.sock 00:06:07.036 20:52:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:07.036 20:52:57 json_config -- common/autotest_common.sh@827 -- # '[' -z 3355598 ']' 00:06:07.036 20:52:57 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:07.036 20:52:57 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.036 20:52:57 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:07.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:07.036 20:52:57 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.036 20:52:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.036 [2024-07-13 20:52:57.777114] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:07.036 [2024-07-13 20:52:57.777164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3355598 ] 00:06:07.036 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.604 [2024-07-13 20:52:58.216905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.604 [2024-07-13 20:52:58.245389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.863 20:52:58 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.863 20:52:58 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:07.863 20:52:58 json_config -- json_config/common.sh@26 -- # echo '' 00:06:07.863 00:06:07.863 20:52:58 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:07.863 20:52:58 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:07.863 20:52:58 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:07.863 20:52:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.863 20:52:58 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:07.863 20:52:58 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:07.863 20:52:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.863 20:52:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.863 20:52:58 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:07.863 20:52:58 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:07.863 20:52:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:11.154 20:53:01 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:11.154 20:53:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:11.154 20:53:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:11.154 20:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.154 20:53:01 json_config -- json_config/json_config.sh@45 
-- # local ret=0 00:06:11.154 20:53:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:11.154 20:53:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:11.154 20:53:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:11.154 20:53:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:11.154 20:53:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:11.154 20:53:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:11.154 20:53:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:11.155 20:53:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.155 20:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:11.155 20:53:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:11.155 20:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:06:11.155 20:53:01 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:06:11.155 20:53:01 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:11.155 20:53:01 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.155 20:53:01 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:11.155 20:53:01 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:11.155 20:53:01 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:11.155 20:53:01 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.155 20:53:01 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:06:11.155 20:53:01 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.155 20:53:01 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:06:11.155 20:53:01 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:11.155 20:53:01 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:06:11.155 20:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@296 -- # e810=() 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@297 -- # x722=() 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@298 -- # mlx=() 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:17.765 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:17.765 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:17.765 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:17.765 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@58 -- # uname 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 
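(rdma_device_init here boils down to loading the kernel IB/RDMA stack and then, in allocate_nic_ips just below, numbering the mlx interfaces from NVMF_IP_LEAST_ADDR=8 inside the 192.168.100.0/24 prefix. The same steps as plain commands — the module list, the prefix, and the address-readback pipeline are all taken from this log:)

for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do modprobe "$m"; done
ip addr add 192.168.100.8/24 dev mlx_0_0                      # first RDMA port gets .8
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # reads back 192.168.100.8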
00:06:17.765 20:53:08 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:17.765 20:53:08 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:18.024 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:18.024 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:18.024 altname enp217s0f0np0 00:06:18.024 altname ens818f0np0 00:06:18.024 inet 192.168.100.8/24 scope global mlx_0_0 00:06:18.024 valid_lft forever preferred_lft forever 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@75 -- # [[ -z 
192.168.100.9 ]] 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:18.024 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:18.024 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:18.024 altname enp217s0f1np1 00:06:18.024 altname ens818f1np1 00:06:18.024 inet 192.168.100.9/24 scope global mlx_0_1 00:06:18.024 valid_lft forever preferred_lft forever 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@422 -- # return 0 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:18.024 20:53:08 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:18.025 192.168.100.9' 
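The get_ip_address calls above reduce to one pipeline per interface; a sketch of the same extraction (the iface_ip helper name is ours):

  # First IPv4 address of an interface, with the /prefix stripped
  iface_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  iface_ip mlx_0_0   # 192.168.100.8 in this run
  iface_ip mlx_0_1   # 192.168.100.9 in this run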
00:06:18.025 20:53:08 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:18.025 192.168.100.9' 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@457 -- # head -n 1 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:18.025 192.168.100.9' 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@458 -- # head -n 1 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:18.025 20:53:08 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:18.025 20:53:08 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:06:18.025 20:53:08 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:18.025 20:53:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:18.283 MallocForNvmf0 00:06:18.283 20:53:08 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:18.283 20:53:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:18.283 MallocForNvmf1 00:06:18.283 20:53:09 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:18.284 20:53:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:18.542 [2024-07-13 20:53:09.290991] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:18.542 [2024-07-13 20:53:09.323055] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16cbec0/0x16d9380) succeed. 00:06:18.542 [2024-07-13 20:53:09.335267] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16ce0b0/0x17593c0) succeed. 
00:06:18.542 20:53:09 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.542 20:53:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.800 20:53:09 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.800 20:53:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:19.058 20:53:09 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:19.058 20:53:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:19.058 20:53:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:19.058 20:53:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:19.316 [2024-07-13 20:53:10.020464] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:19.316 20:53:10 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:19.316 20:53:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.316 20:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.316 20:53:10 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:19.316 20:53:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.316 20:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.316 20:53:10 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:19.316 20:53:10 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.316 20:53:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.575 MallocBdevForConfigChangeCheck 00:06:19.575 20:53:10 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:19.575 20:53:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.575 20:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.575 20:53:10 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:19.575 20:53:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.834 20:53:10 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:19.834 INFO: shutting down applications... 
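Stripped of the harness, the target configuration traced above is a short rpc.py sequence against the same socket; a sketch using the arguments from the log:

  rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB bdev, 512 B blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024 B blocks
  $rpc nvmf_create_transport -t rdma -u 8192 -c 0        # in-capsule size gets raised to the 256 B minimum, per the warning above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420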
00:06:19.834 20:53:10 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:19.834 20:53:10 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:19.834 20:53:10 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:19.834 20:53:10 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:22.367 Calling clear_iscsi_subsystem 00:06:22.367 Calling clear_nvmf_subsystem 00:06:22.367 Calling clear_nbd_subsystem 00:06:22.367 Calling clear_ublk_subsystem 00:06:22.367 Calling clear_vhost_blk_subsystem 00:06:22.367 Calling clear_vhost_scsi_subsystem 00:06:22.367 Calling clear_bdev_subsystem 00:06:22.367 20:53:13 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:22.367 20:53:13 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:22.367 20:53:13 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:22.367 20:53:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.367 20:53:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:22.367 20:53:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:22.625 20:53:13 json_config -- json_config/json_config.sh@345 -- # break 00:06:22.625 20:53:13 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:22.625 20:53:13 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:22.625 20:53:13 json_config -- json_config/common.sh@31 -- # local app=target 00:06:22.625 20:53:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:22.625 20:53:13 json_config -- json_config/common.sh@35 -- # [[ -n 3355598 ]] 00:06:22.625 20:53:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3355598 00:06:22.625 20:53:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:22.625 20:53:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.625 20:53:13 json_config -- json_config/common.sh@41 -- # kill -0 3355598 00:06:22.625 20:53:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:23.192 20:53:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:23.192 20:53:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.192 20:53:14 json_config -- json_config/common.sh@41 -- # kill -0 3355598 00:06:23.192 20:53:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:23.192 20:53:14 json_config -- json_config/common.sh@43 -- # break 00:06:23.192 20:53:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:23.192 20:53:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:23.192 SPDK target shutdown done 00:06:23.192 20:53:14 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:23.192 INFO: relaunching applications... 
00:06:23.192 20:53:14 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.192 20:53:14 json_config -- json_config/common.sh@9 -- # local app=target 00:06:23.192 20:53:14 json_config -- json_config/common.sh@10 -- # shift 00:06:23.192 20:53:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:23.192 20:53:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:23.192 20:53:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:23.192 20:53:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.192 20:53:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.192 20:53:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3360684 00:06:23.192 20:53:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:23.192 Waiting for target to run... 00:06:23.192 20:53:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.192 20:53:14 json_config -- json_config/common.sh@25 -- # waitforlisten 3360684 /var/tmp/spdk_tgt.sock 00:06:23.192 20:53:14 json_config -- common/autotest_common.sh@827 -- # '[' -z 3360684 ']' 00:06:23.192 20:53:14 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.192 20:53:14 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.192 20:53:14 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:23.192 20:53:14 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.192 20:53:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.192 [2024-07-13 20:53:14.059106] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:23.192 [2024-07-13 20:53:14.059168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3360684 ] 00:06:23.450 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.708 [2024-07-13 20:53:14.347348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.708 [2024-07-13 20:53:14.368906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.998 [2024-07-13 20:53:17.400832] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2346d40/0x2353bc0) succeed. 00:06:26.998 [2024-07-13 20:53:17.411499] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2348f30/0x23d3c00) succeed. 
00:06:26.998 [2024-07-13 20:53:17.461361] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:26.998 20:53:17 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.998 20:53:17 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:26.998 20:53:17 json_config -- json_config/common.sh@26 -- # echo '' 00:06:26.998 00:06:26.998 20:53:17 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:26.998 20:53:17 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:26.998 INFO: Checking if target configuration is the same... 00:06:26.998 20:53:17 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.998 20:53:17 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:26.998 20:53:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.998 + '[' 2 -ne 2 ']' 00:06:26.998 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:26.998 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:26.998 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:26.998 +++ basename /dev/fd/62 00:06:26.998 ++ mktemp /tmp/62.XXX 00:06:26.998 + tmp_file_1=/tmp/62.ED6 00:06:26.998 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.998 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:26.998 + tmp_file_2=/tmp/spdk_tgt_config.json.i2a 00:06:26.998 + ret=0 00:06:26.998 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.998 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:26.998 + diff -u /tmp/62.ED6 /tmp/spdk_tgt_config.json.i2a 00:06:26.998 + echo 'INFO: JSON config files are the same' 00:06:26.998 INFO: JSON config files are the same 00:06:26.998 + rm /tmp/62.ED6 /tmp/spdk_tgt_config.json.i2a 00:06:26.998 + exit 0 00:06:26.998 20:53:17 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:26.998 20:53:17 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:26.998 INFO: changing configuration and checking if this can be detected... 
00:06:26.998 20:53:17 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:26.998 20:53:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:27.258 20:53:18 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.258 20:53:18 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:27.258 20:53:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.258 + '[' 2 -ne 2 ']' 00:06:27.258 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:27.258 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:27.258 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:27.258 +++ basename /dev/fd/62 00:06:27.258 ++ mktemp /tmp/62.XXX 00:06:27.258 + tmp_file_1=/tmp/62.tFP 00:06:27.258 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.258 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:27.258 + tmp_file_2=/tmp/spdk_tgt_config.json.DZi 00:06:27.258 + ret=0 00:06:27.258 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:27.517 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:27.517 + diff -u /tmp/62.tFP /tmp/spdk_tgt_config.json.DZi 00:06:27.517 + ret=1 00:06:27.517 + echo '=== Start of file: /tmp/62.tFP ===' 00:06:27.517 + cat /tmp/62.tFP 00:06:27.517 + echo '=== End of file: /tmp/62.tFP ===' 00:06:27.517 + echo '' 00:06:27.517 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DZi ===' 00:06:27.517 + cat /tmp/spdk_tgt_config.json.DZi 00:06:27.517 + echo '=== End of file: /tmp/spdk_tgt_config.json.DZi ===' 00:06:27.517 + echo '' 00:06:27.517 + rm /tmp/62.tFP /tmp/spdk_tgt_config.json.DZi 00:06:27.517 + exit 1 00:06:27.518 20:53:18 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:27.518 INFO: configuration change detected. 
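Both comparison runs above boil down to sort-then-diff of two JSON configs; a hand-run sketch (file names and the sort_cfg helper are ours; config_filter.py reads stdin as in the trace):

  sort_cfg() { test/json_config/config_filter.py -method sort; }
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_cfg > /tmp/live.json
  sort_cfg < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'configs match'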
00:06:27.518 20:53:18 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:27.518 20:53:18 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:27.518 20:53:18 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:27.518 20:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.518 20:53:18 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:27.518 20:53:18 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:27.518 20:53:18 json_config -- json_config/json_config.sh@317 -- # [[ -n 3360684 ]] 00:06:27.518 20:53:18 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:27.518 20:53:18 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:27.518 20:53:18 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:27.518 20:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.777 20:53:18 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:27.777 20:53:18 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:27.777 20:53:18 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:27.777 20:53:18 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:27.777 20:53:18 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:27.777 20:53:18 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.777 20:53:18 json_config -- json_config/json_config.sh@323 -- # killprocess 3360684 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@946 -- # '[' -z 3360684 ']' 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@950 -- # kill -0 3360684 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@951 -- # uname 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3360684 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3360684' 00:06:27.777 killing process with pid 3360684 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@965 -- # kill 3360684 00:06:27.777 20:53:18 json_config -- common/autotest_common.sh@970 -- # wait 3360684 00:06:30.314 20:53:21 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.314 20:53:21 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:30.314 20:53:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.314 20:53:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.314 20:53:21 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:30.314 20:53:21 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:30.314 INFO: Success 00:06:30.314 20:53:21 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:30.314 20:53:21 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:30.314 20:53:21 json_config -- nvmf/common.sh@117 -- # sync 00:06:30.314 20:53:21 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:06:30.314 20:53:21 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:06:30.314 20:53:21 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:30.315 20:53:21 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:30.315 20:53:21 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:06:30.315 00:06:30.315 real 0m23.466s 00:06:30.315 user 0m25.680s 00:06:30.315 sys 0m7.479s 00:06:30.315 20:53:21 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.315 20:53:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.315 ************************************ 00:06:30.315 END TEST json_config 00:06:30.315 ************************************ 00:06:30.315 20:53:21 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:30.315 20:53:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.315 20:53:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.315 20:53:21 -- common/autotest_common.sh@10 -- # set +x 00:06:30.315 ************************************ 00:06:30.315 START TEST json_config_extra_key 00:06:30.315 ************************************ 00:06:30.315 20:53:21 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:30.575 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
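The host identity set up by nvmf/common.sh pairs a freshly generated NQN with its UUID suffix; a sketch (deriving the host ID by stripping the NQN prefix is our reading of the two values in the trace):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")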
00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:30.575 20:53:21 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.575 20:53:21 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.575 20:53:21 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.575 20:53:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.575 20:53:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.575 20:53:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.575 20:53:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:30.575 20:53:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.575 20:53:21 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:30.576 20:53:21 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:30.576 INFO: launching applications... 00:06:30.576 20:53:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3362066 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.576 Waiting for target to run... 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3362066 /var/tmp/spdk_tgt.sock 00:06:30.576 20:53:21 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3362066 ']' 00:06:30.576 20:53:21 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:30.576 20:53:21 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.576 20:53:21 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.576 20:53:21 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
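The launch pattern repeated across these tests is: start spdk_tgt with a JSON config, then wait for its RPC socket; a sketch in which the polling loop is our stand-in for the suite's waitforlisten:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  tgt_pid=$!
  # Poll until the RPC server answers (roughly what waitforlisten does)
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done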
00:06:30.576 20:53:21 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.576 20:53:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.576 [2024-07-13 20:53:21.306665] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:30.576 [2024-07-13 20:53:21.306720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3362066 ] 00:06:30.576 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.145 [2024-07-13 20:53:21.740315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.145 [2024-07-13 20:53:21.770646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.404 20:53:22 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.404 20:53:22 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:31.404 20:53:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:31.404 00:06:31.404 20:53:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:31.404 INFO: shutting down applications... 00:06:31.404 20:53:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:31.404 20:53:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:31.404 20:53:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:31.404 20:53:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3362066 ]] 00:06:31.404 20:53:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3362066 00:06:31.404 20:53:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:31.404 20:53:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:31.404 20:53:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3362066 00:06:31.404 20:53:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:31.972 20:53:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:31.972 20:53:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:31.972 20:53:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3362066 00:06:31.972 20:53:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:31.972 20:53:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:31.972 20:53:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:31.972 20:53:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:31.972 SPDK target shutdown done 00:06:31.972 20:53:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:31.972 Success 00:06:31.972 00:06:31.972 real 0m1.456s 00:06:31.972 user 0m1.021s 00:06:31.972 sys 0m0.568s 00:06:31.972 20:53:22 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.972 20:53:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:31.972 ************************************ 00:06:31.972 END TEST json_config_extra_key 00:06:31.972 ************************************ 00:06:31.972 20:53:22 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:31.972 20:53:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.972 20:53:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.972 20:53:22 -- common/autotest_common.sh@10 -- # set +x 00:06:31.972 ************************************ 00:06:31.972 START TEST alias_rpc 00:06:31.972 ************************************ 00:06:31.972 20:53:22 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:31.972 * Looking for test storage... 00:06:31.972 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:31.972 20:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:31.972 20:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:31.972 20:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3362400 00:06:31.972 20:53:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3362400 00:06:31.972 20:53:22 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3362400 ']' 00:06:31.972 20:53:22 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.972 20:53:22 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.972 20:53:22 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.972 20:53:22 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.972 20:53:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.972 [2024-07-13 20:53:22.808469] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:31.973 [2024-07-13 20:53:22.808529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3362400 ] 00:06:31.973 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.231 [2024-07-13 20:53:22.877914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.231 [2024-07-13 20:53:22.917170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.231 20:53:23 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.231 20:53:23 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:32.231 20:53:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:32.489 20:53:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3362400 00:06:32.489 20:53:23 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3362400 ']' 00:06:32.489 20:53:23 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3362400 00:06:32.489 20:53:23 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:32.489 20:53:23 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.489 20:53:23 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3362400 00:06:32.489 20:53:23 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.489 20:53:23 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.489 20:53:23 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3362400' 00:06:32.489 killing process with pid 3362400 00:06:32.489 20:53:23 alias_rpc -- common/autotest_common.sh@965 -- # kill 3362400 00:06:32.489 20:53:23 alias_rpc -- common/autotest_common.sh@970 -- # wait 3362400 00:06:32.749 00:06:32.749 real 0m0.949s 00:06:32.749 user 0m0.915s 00:06:32.749 sys 0m0.395s 00:06:32.749 20:53:23 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.749 20:53:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.749 ************************************ 00:06:32.749 END TEST alias_rpc 00:06:32.749 ************************************ 00:06:33.008 20:53:23 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:33.008 20:53:23 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:33.008 20:53:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.008 20:53:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.009 20:53:23 -- common/autotest_common.sh@10 -- # set +x 00:06:33.009 ************************************ 00:06:33.009 START TEST spdkcli_tcp 00:06:33.009 ************************************ 00:06:33.009 20:53:23 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:33.009 * Looking for test storage... 
00:06:33.009 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:33.009 20:53:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:33.009 20:53:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:33.009 20:53:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:33.009 20:53:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:33.009 20:53:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:33.009 20:53:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:33.009 20:53:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:33.009 20:53:23 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:33.009 20:53:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.009 20:53:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3362511 00:06:33.009 20:53:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3362511 00:06:33.009 20:53:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:33.009 20:53:23 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3362511 ']' 00:06:33.009 20:53:23 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.009 20:53:23 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.009 20:53:23 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.009 20:53:23 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.009 20:53:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.009 [2024-07-13 20:53:23.846834] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:33.009 [2024-07-13 20:53:23.846886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3362511 ] 00:06:33.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.268 [2024-07-13 20:53:23.919465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.268 [2024-07-13 20:53:23.958888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.268 [2024-07-13 20:53:23.958891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.838 20:53:24 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.838 20:53:24 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:33.838 20:53:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:33.838 20:53:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3362764 00:06:33.838 20:53:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:34.098 [ 00:06:34.098 "bdev_malloc_delete", 00:06:34.098 "bdev_malloc_create", 00:06:34.098 "bdev_null_resize", 00:06:34.098 "bdev_null_delete", 00:06:34.098 "bdev_null_create", 00:06:34.098 "bdev_nvme_cuse_unregister", 00:06:34.098 "bdev_nvme_cuse_register", 00:06:34.098 "bdev_opal_new_user", 00:06:34.098 "bdev_opal_set_lock_state", 00:06:34.098 "bdev_opal_delete", 00:06:34.098 "bdev_opal_get_info", 00:06:34.098 "bdev_opal_create", 00:06:34.098 "bdev_nvme_opal_revert", 00:06:34.098 "bdev_nvme_opal_init", 00:06:34.098 "bdev_nvme_send_cmd", 00:06:34.098 "bdev_nvme_get_path_iostat", 00:06:34.098 "bdev_nvme_get_mdns_discovery_info", 00:06:34.098 "bdev_nvme_stop_mdns_discovery", 00:06:34.098 "bdev_nvme_start_mdns_discovery", 00:06:34.098 "bdev_nvme_set_multipath_policy", 00:06:34.098 "bdev_nvme_set_preferred_path", 00:06:34.098 "bdev_nvme_get_io_paths", 00:06:34.098 "bdev_nvme_remove_error_injection", 00:06:34.098 "bdev_nvme_add_error_injection", 00:06:34.098 "bdev_nvme_get_discovery_info", 00:06:34.098 "bdev_nvme_stop_discovery", 00:06:34.098 "bdev_nvme_start_discovery", 00:06:34.098 "bdev_nvme_get_controller_health_info", 00:06:34.098 "bdev_nvme_disable_controller", 00:06:34.098 "bdev_nvme_enable_controller", 00:06:34.098 "bdev_nvme_reset_controller", 00:06:34.098 "bdev_nvme_get_transport_statistics", 00:06:34.098 "bdev_nvme_apply_firmware", 00:06:34.098 "bdev_nvme_detach_controller", 00:06:34.098 "bdev_nvme_get_controllers", 00:06:34.098 "bdev_nvme_attach_controller", 00:06:34.098 "bdev_nvme_set_hotplug", 00:06:34.098 "bdev_nvme_set_options", 00:06:34.098 "bdev_passthru_delete", 00:06:34.098 "bdev_passthru_create", 00:06:34.098 "bdev_lvol_set_parent_bdev", 00:06:34.098 "bdev_lvol_set_parent", 00:06:34.098 "bdev_lvol_check_shallow_copy", 00:06:34.098 "bdev_lvol_start_shallow_copy", 00:06:34.098 "bdev_lvol_grow_lvstore", 00:06:34.098 "bdev_lvol_get_lvols", 00:06:34.098 "bdev_lvol_get_lvstores", 00:06:34.098 "bdev_lvol_delete", 00:06:34.098 "bdev_lvol_set_read_only", 00:06:34.098 "bdev_lvol_resize", 00:06:34.098 "bdev_lvol_decouple_parent", 00:06:34.098 "bdev_lvol_inflate", 00:06:34.098 "bdev_lvol_rename", 00:06:34.098 "bdev_lvol_clone_bdev", 00:06:34.098 "bdev_lvol_clone", 00:06:34.098 "bdev_lvol_snapshot", 00:06:34.098 "bdev_lvol_create", 00:06:34.098 "bdev_lvol_delete_lvstore", 00:06:34.098 "bdev_lvol_rename_lvstore", 
00:06:34.098 "bdev_lvol_create_lvstore", 00:06:34.098 "bdev_raid_set_options", 00:06:34.098 "bdev_raid_remove_base_bdev", 00:06:34.098 "bdev_raid_add_base_bdev", 00:06:34.098 "bdev_raid_delete", 00:06:34.098 "bdev_raid_create", 00:06:34.098 "bdev_raid_get_bdevs", 00:06:34.098 "bdev_error_inject_error", 00:06:34.098 "bdev_error_delete", 00:06:34.098 "bdev_error_create", 00:06:34.098 "bdev_split_delete", 00:06:34.098 "bdev_split_create", 00:06:34.098 "bdev_delay_delete", 00:06:34.098 "bdev_delay_create", 00:06:34.098 "bdev_delay_update_latency", 00:06:34.098 "bdev_zone_block_delete", 00:06:34.098 "bdev_zone_block_create", 00:06:34.098 "blobfs_create", 00:06:34.098 "blobfs_detect", 00:06:34.098 "blobfs_set_cache_size", 00:06:34.098 "bdev_aio_delete", 00:06:34.098 "bdev_aio_rescan", 00:06:34.098 "bdev_aio_create", 00:06:34.098 "bdev_ftl_set_property", 00:06:34.098 "bdev_ftl_get_properties", 00:06:34.098 "bdev_ftl_get_stats", 00:06:34.098 "bdev_ftl_unmap", 00:06:34.098 "bdev_ftl_unload", 00:06:34.098 "bdev_ftl_delete", 00:06:34.098 "bdev_ftl_load", 00:06:34.098 "bdev_ftl_create", 00:06:34.098 "bdev_virtio_attach_controller", 00:06:34.098 "bdev_virtio_scsi_get_devices", 00:06:34.098 "bdev_virtio_detach_controller", 00:06:34.098 "bdev_virtio_blk_set_hotplug", 00:06:34.098 "bdev_iscsi_delete", 00:06:34.098 "bdev_iscsi_create", 00:06:34.098 "bdev_iscsi_set_options", 00:06:34.098 "accel_error_inject_error", 00:06:34.098 "ioat_scan_accel_module", 00:06:34.098 "dsa_scan_accel_module", 00:06:34.098 "iaa_scan_accel_module", 00:06:34.098 "keyring_file_remove_key", 00:06:34.098 "keyring_file_add_key", 00:06:34.098 "keyring_linux_set_options", 00:06:34.098 "iscsi_get_histogram", 00:06:34.098 "iscsi_enable_histogram", 00:06:34.098 "iscsi_set_options", 00:06:34.098 "iscsi_get_auth_groups", 00:06:34.098 "iscsi_auth_group_remove_secret", 00:06:34.098 "iscsi_auth_group_add_secret", 00:06:34.098 "iscsi_delete_auth_group", 00:06:34.098 "iscsi_create_auth_group", 00:06:34.098 "iscsi_set_discovery_auth", 00:06:34.098 "iscsi_get_options", 00:06:34.098 "iscsi_target_node_request_logout", 00:06:34.098 "iscsi_target_node_set_redirect", 00:06:34.098 "iscsi_target_node_set_auth", 00:06:34.098 "iscsi_target_node_add_lun", 00:06:34.098 "iscsi_get_stats", 00:06:34.098 "iscsi_get_connections", 00:06:34.098 "iscsi_portal_group_set_auth", 00:06:34.099 "iscsi_start_portal_group", 00:06:34.099 "iscsi_delete_portal_group", 00:06:34.099 "iscsi_create_portal_group", 00:06:34.099 "iscsi_get_portal_groups", 00:06:34.099 "iscsi_delete_target_node", 00:06:34.099 "iscsi_target_node_remove_pg_ig_maps", 00:06:34.099 "iscsi_target_node_add_pg_ig_maps", 00:06:34.099 "iscsi_create_target_node", 00:06:34.099 "iscsi_get_target_nodes", 00:06:34.099 "iscsi_delete_initiator_group", 00:06:34.099 "iscsi_initiator_group_remove_initiators", 00:06:34.099 "iscsi_initiator_group_add_initiators", 00:06:34.099 "iscsi_create_initiator_group", 00:06:34.099 "iscsi_get_initiator_groups", 00:06:34.099 "nvmf_set_crdt", 00:06:34.099 "nvmf_set_config", 00:06:34.099 "nvmf_set_max_subsystems", 00:06:34.099 "nvmf_stop_mdns_prr", 00:06:34.099 "nvmf_publish_mdns_prr", 00:06:34.099 "nvmf_subsystem_get_listeners", 00:06:34.099 "nvmf_subsystem_get_qpairs", 00:06:34.099 "nvmf_subsystem_get_controllers", 00:06:34.099 "nvmf_get_stats", 00:06:34.099 "nvmf_get_transports", 00:06:34.099 "nvmf_create_transport", 00:06:34.099 "nvmf_get_targets", 00:06:34.099 "nvmf_delete_target", 00:06:34.099 "nvmf_create_target", 00:06:34.099 "nvmf_subsystem_allow_any_host", 00:06:34.099 
"nvmf_subsystem_remove_host", 00:06:34.099 "nvmf_subsystem_add_host", 00:06:34.099 "nvmf_ns_remove_host", 00:06:34.099 "nvmf_ns_add_host", 00:06:34.099 "nvmf_subsystem_remove_ns", 00:06:34.099 "nvmf_subsystem_add_ns", 00:06:34.099 "nvmf_subsystem_listener_set_ana_state", 00:06:34.099 "nvmf_discovery_get_referrals", 00:06:34.099 "nvmf_discovery_remove_referral", 00:06:34.099 "nvmf_discovery_add_referral", 00:06:34.099 "nvmf_subsystem_remove_listener", 00:06:34.099 "nvmf_subsystem_add_listener", 00:06:34.099 "nvmf_delete_subsystem", 00:06:34.099 "nvmf_create_subsystem", 00:06:34.099 "nvmf_get_subsystems", 00:06:34.099 "env_dpdk_get_mem_stats", 00:06:34.099 "nbd_get_disks", 00:06:34.099 "nbd_stop_disk", 00:06:34.099 "nbd_start_disk", 00:06:34.099 "ublk_recover_disk", 00:06:34.099 "ublk_get_disks", 00:06:34.099 "ublk_stop_disk", 00:06:34.099 "ublk_start_disk", 00:06:34.099 "ublk_destroy_target", 00:06:34.099 "ublk_create_target", 00:06:34.099 "virtio_blk_create_transport", 00:06:34.099 "virtio_blk_get_transports", 00:06:34.099 "vhost_controller_set_coalescing", 00:06:34.099 "vhost_get_controllers", 00:06:34.099 "vhost_delete_controller", 00:06:34.099 "vhost_create_blk_controller", 00:06:34.099 "vhost_scsi_controller_remove_target", 00:06:34.099 "vhost_scsi_controller_add_target", 00:06:34.099 "vhost_start_scsi_controller", 00:06:34.099 "vhost_create_scsi_controller", 00:06:34.099 "thread_set_cpumask", 00:06:34.099 "framework_get_scheduler", 00:06:34.099 "framework_set_scheduler", 00:06:34.099 "framework_get_reactors", 00:06:34.099 "thread_get_io_channels", 00:06:34.099 "thread_get_pollers", 00:06:34.099 "thread_get_stats", 00:06:34.099 "framework_monitor_context_switch", 00:06:34.099 "spdk_kill_instance", 00:06:34.099 "log_enable_timestamps", 00:06:34.099 "log_get_flags", 00:06:34.099 "log_clear_flag", 00:06:34.099 "log_set_flag", 00:06:34.099 "log_get_level", 00:06:34.099 "log_set_level", 00:06:34.099 "log_get_print_level", 00:06:34.099 "log_set_print_level", 00:06:34.099 "framework_enable_cpumask_locks", 00:06:34.099 "framework_disable_cpumask_locks", 00:06:34.099 "framework_wait_init", 00:06:34.099 "framework_start_init", 00:06:34.099 "scsi_get_devices", 00:06:34.099 "bdev_get_histogram", 00:06:34.099 "bdev_enable_histogram", 00:06:34.099 "bdev_set_qos_limit", 00:06:34.099 "bdev_set_qd_sampling_period", 00:06:34.099 "bdev_get_bdevs", 00:06:34.099 "bdev_reset_iostat", 00:06:34.099 "bdev_get_iostat", 00:06:34.099 "bdev_examine", 00:06:34.099 "bdev_wait_for_examine", 00:06:34.099 "bdev_set_options", 00:06:34.099 "notify_get_notifications", 00:06:34.099 "notify_get_types", 00:06:34.099 "accel_get_stats", 00:06:34.099 "accel_set_options", 00:06:34.099 "accel_set_driver", 00:06:34.099 "accel_crypto_key_destroy", 00:06:34.099 "accel_crypto_keys_get", 00:06:34.099 "accel_crypto_key_create", 00:06:34.099 "accel_assign_opc", 00:06:34.099 "accel_get_module_info", 00:06:34.099 "accel_get_opc_assignments", 00:06:34.099 "vmd_rescan", 00:06:34.099 "vmd_remove_device", 00:06:34.099 "vmd_enable", 00:06:34.099 "sock_get_default_impl", 00:06:34.099 "sock_set_default_impl", 00:06:34.099 "sock_impl_set_options", 00:06:34.099 "sock_impl_get_options", 00:06:34.099 "iobuf_get_stats", 00:06:34.099 "iobuf_set_options", 00:06:34.099 "framework_get_pci_devices", 00:06:34.099 "framework_get_config", 00:06:34.099 "framework_get_subsystems", 00:06:34.099 "trace_get_info", 00:06:34.099 "trace_get_tpoint_group_mask", 00:06:34.099 "trace_disable_tpoint_group", 00:06:34.099 "trace_enable_tpoint_group", 00:06:34.099 
"trace_clear_tpoint_mask", 00:06:34.099 "trace_set_tpoint_mask", 00:06:34.099 "keyring_get_keys", 00:06:34.099 "spdk_get_version", 00:06:34.099 "rpc_get_methods" 00:06:34.099 ] 00:06:34.099 20:53:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.099 20:53:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:34.099 20:53:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3362511 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3362511 ']' 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3362511 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3362511 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3362511' 00:06:34.099 killing process with pid 3362511 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3362511 00:06:34.099 20:53:24 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3362511 00:06:34.358 00:06:34.358 real 0m1.494s 00:06:34.358 user 0m2.743s 00:06:34.358 sys 0m0.507s 00:06:34.358 20:53:25 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.358 20:53:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.358 ************************************ 00:06:34.358 END TEST spdkcli_tcp 00:06:34.358 ************************************ 00:06:34.358 20:53:25 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.358 20:53:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.358 20:53:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.358 20:53:25 -- common/autotest_common.sh@10 -- # set +x 00:06:34.617 ************************************ 00:06:34.617 START TEST dpdk_mem_utility 00:06:34.617 ************************************ 00:06:34.617 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.617 * Looking for test storage... 
00:06:34.617 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:34.617 20:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:34.617 20:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3362845 00:06:34.617 20:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.618 20:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3362845 00:06:34.618 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3362845 ']' 00:06:34.618 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.618 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.618 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.618 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.618 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.618 [2024-07-13 20:53:25.432294] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:34.618 [2024-07-13 20:53:25.432345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3362845 ] 00:06:34.618 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.618 [2024-07-13 20:53:25.505654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.877 [2024-07-13 20:53:25.544380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.877 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.877 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:34.877 20:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:34.877 20:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:34.877 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.877 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.877 { 00:06:34.877 "filename": "/tmp/spdk_mem_dump.txt" 00:06:34.877 } 00:06:34.877 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.877 20:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:35.137 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:35.137 1 heaps totaling size 814.000000 MiB 00:06:35.137 size: 814.000000 MiB heap id: 0 00:06:35.137 end heaps---------- 00:06:35.137 8 mempools totaling size 598.116089 MiB 00:06:35.137 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:35.137 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:35.137 size: 84.521057 MiB name: bdev_io_3362845 00:06:35.137 size: 51.011292 MiB name: evtpool_3362845 00:06:35.137 size: 50.003479 MiB name: msgpool_3362845 
00:06:35.137 size: 21.763794 MiB name: PDU_Pool 00:06:35.137 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:35.137 size: 0.026123 MiB name: Session_Pool 00:06:35.137 end mempools------- 00:06:35.137 6 memzones totaling size 4.142822 MiB 00:06:35.137 size: 1.000366 MiB name: RG_ring_0_3362845 00:06:35.137 size: 1.000366 MiB name: RG_ring_1_3362845 00:06:35.137 size: 1.000366 MiB name: RG_ring_4_3362845 00:06:35.137 size: 1.000366 MiB name: RG_ring_5_3362845 00:06:35.137 size: 0.125366 MiB name: RG_ring_2_3362845 00:06:35.137 size: 0.015991 MiB name: RG_ring_3_3362845 00:06:35.137 end memzones------- 00:06:35.137 20:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:35.137 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:35.137 list of free elements. size: 12.519348 MiB 00:06:35.137 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:35.137 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:35.137 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:35.137 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:35.137 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:35.137 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:35.137 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:35.137 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:35.137 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:35.137 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:35.137 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:35.137 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:35.137 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:35.137 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:35.137 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:35.137 list of standard malloc elements. 
size: 199.218079 MiB 00:06:35.137 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:35.137 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:35.137 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:35.137 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:35.137 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:35.137 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:35.137 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:35.137 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:35.137 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:35.137 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:35.137 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:35.137 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:35.137 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:35.137 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:35.137 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:35.137 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:35.137 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:35.137 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:35.137 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:35.137 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:35.137 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:35.137 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:35.137 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:35.137 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:35.137 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:35.137 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:35.137 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:35.137 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:35.137 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:35.137 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:35.137 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:35.137 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:35.138 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:35.138 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:35.138 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:35.138 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:35.138 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:35.138 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:35.138 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:35.138 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:35.138 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:35.138 list of memzone associated elements. 
size: 602.262573 MiB 00:06:35.138 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:35.138 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:35.138 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:35.138 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:35.138 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:35.138 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3362845_0 00:06:35.138 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:35.138 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3362845_0 00:06:35.138 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:35.138 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3362845_0 00:06:35.138 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:35.138 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:35.138 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:35.138 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:35.138 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:35.138 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3362845 00:06:35.138 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:35.138 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3362845 00:06:35.138 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:35.138 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3362845 00:06:35.138 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:35.138 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:35.138 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:35.138 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:35.138 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:35.138 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:35.138 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:35.138 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:35.138 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:35.138 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3362845 00:06:35.138 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:35.138 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3362845 00:06:35.138 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:35.138 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3362845 00:06:35.138 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:35.138 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3362845 00:06:35.138 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:35.138 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3362845 00:06:35.138 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:35.138 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:35.138 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:35.138 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:35.138 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:35.138 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:35.138 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:35.138 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3362845 00:06:35.138 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:35.138 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:35.138 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:35.138 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:35.138 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:35.138 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3362845 00:06:35.138 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:35.138 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:35.138 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:35.138 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3362845 00:06:35.138 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:35.138 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3362845 00:06:35.138 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:35.138 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:35.138 20:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:35.138 20:53:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3362845 00:06:35.138 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3362845 ']' 00:06:35.138 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3362845 00:06:35.138 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:35.138 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.138 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3362845 00:06:35.138 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.138 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.138 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3362845' 00:06:35.138 killing process with pid 3362845 00:06:35.138 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3362845 00:06:35.138 20:53:25 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3362845 00:06:35.397 00:06:35.397 real 0m0.916s 00:06:35.397 user 0m0.838s 00:06:35.397 sys 0m0.412s 00:06:35.397 20:53:26 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.397 20:53:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:35.397 ************************************ 00:06:35.397 END TEST dpdk_mem_utility 00:06:35.397 ************************************ 00:06:35.397 20:53:26 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:35.397 20:53:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.397 20:53:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.397 20:53:26 -- common/autotest_common.sh@10 -- # set +x 00:06:35.397 ************************************ 00:06:35.397 START TEST event 00:06:35.397 ************************************ 00:06:35.397 20:53:26 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:35.656 * Looking for test storage... 
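Everything from the heap summary down to the memzone table above comes from two calls: the `env_dpdk_get_mem_stats` RPC, which makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which parses that dump. A minimal sketch of reproducing it by hand, assuming a running spdk_tgt and paths relative to the SPDK repo root:

    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0           # free/busy element detail for heap id 0

The per-pid names in the dump (msgpool_3362845, RG_ring_0_3362845, ...) embed the target's pid, so they change on every run.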
00:06:35.656 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:35.656 20:53:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:35.656 20:53:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:35.656 20:53:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:35.656 20:53:26 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:35.656 20:53:26 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.656 20:53:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.656 ************************************ 00:06:35.656 START TEST event_perf 00:06:35.656 ************************************ 00:06:35.657 20:53:26 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:35.657 Running I/O for 1 seconds...[2024-07-13 20:53:26.440517] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:35.657 [2024-07-13 20:53:26.440598] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3363163 ] 00:06:35.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.657 [2024-07-13 20:53:26.514593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.915 [2024-07-13 20:53:26.556257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.915 [2024-07-13 20:53:26.556353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.915 [2024-07-13 20:53:26.556440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.915 [2024-07-13 20:53:26.556442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.852 Running I/O for 1 seconds... 00:06:36.852 lcore 0: 206001 00:06:36.852 lcore 1: 206001 00:06:36.852 lcore 2: 205999 00:06:36.852 lcore 3: 206002 00:06:36.852 done. 00:06:36.852 00:06:36.852 real 0m1.196s 00:06:36.852 user 0m4.094s 00:06:36.852 sys 0m0.097s 00:06:36.852 20:53:27 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.852 20:53:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.853 ************************************ 00:06:36.853 END TEST event_perf 00:06:36.853 ************************************ 00:06:36.853 20:53:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:36.853 20:53:27 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:36.853 20:53:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.853 20:53:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.853 ************************************ 00:06:36.853 START TEST event_reactor 00:06:36.853 ************************************ 00:06:36.853 20:53:27 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:36.853 [2024-07-13 20:53:27.720585] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
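In the event_perf run above, the 0xF mask starts four reactors and `-t 1` bounds the run to one second; the per-lcore counters (roughly 206k events on each of cores 0-3) show the generated load spreading evenly across the reactors. A sketch of rerunning it with a narrower mask (the 0x3 value is a hypothetical variation on the flags used above, keeping only two reactors):

    ./test/event/event_perf/event_perf -m 0x3 -t 1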
00:06:36.853 [2024-07-13 20:53:27.720653] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3363446 ] 00:06:37.113 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.113 [2024-07-13 20:53:27.791707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.113 [2024-07-13 20:53:27.828573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.117 test_start 00:06:38.117 oneshot 00:06:38.117 tick 100 00:06:38.117 tick 100 00:06:38.117 tick 250 00:06:38.117 tick 100 00:06:38.117 tick 100 00:06:38.117 tick 100 00:06:38.117 tick 250 00:06:38.117 tick 500 00:06:38.117 tick 100 00:06:38.117 tick 100 00:06:38.117 tick 250 00:06:38.117 tick 100 00:06:38.117 tick 100 00:06:38.117 test_end 00:06:38.117 00:06:38.117 real 0m1.189s 00:06:38.117 user 0m1.097s 00:06:38.117 sys 0m0.089s 00:06:38.118 20:53:28 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.118 20:53:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:38.118 ************************************ 00:06:38.118 END TEST event_reactor 00:06:38.118 ************************************ 00:06:38.118 20:53:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:38.118 20:53:28 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:38.118 20:53:28 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.118 20:53:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.118 ************************************ 00:06:38.118 START TEST event_reactor_perf 00:06:38.118 ************************************ 00:06:38.118 20:53:28 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:38.118 [2024-07-13 20:53:28.985940] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
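The oneshot/tick trace above is the reactor test registering timed events on its single reactor (core 0, mask 0x1) and logging each expiration; the `tick 100`, `tick 250` and `tick 500` lines appear to correspond to the different timer periods the test schedules. On a live target the equivalent state can be read over RPC; both methods appear in the rpc_get_methods dump earlier in this log:

    ./scripts/rpc.py thread_get_pollers   # registered pollers per SPDK thread
    ./scripts/rpc.py thread_get_stats     # busy/idle tick counters per thread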
00:06:38.118 [2024-07-13 20:53:28.986018] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3363736 ] 00:06:38.377 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.377 [2024-07-13 20:53:29.059195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.377 [2024-07-13 20:53:29.096700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.313 test_start 00:06:39.313 test_end 00:06:39.313 Performance: 522540 events per second 00:06:39.313 00:06:39.313 real 0m1.194s 00:06:39.313 user 0m1.100s 00:06:39.313 sys 0m0.091s 00:06:39.313 20:53:30 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.313 20:53:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.313 ************************************ 00:06:39.313 END TEST event_reactor_perf 00:06:39.313 ************************************ 00:06:39.313 20:53:30 event -- event/event.sh@49 -- # uname -s 00:06:39.571 20:53:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:39.571 20:53:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:39.571 20:53:30 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.571 20:53:30 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.571 20:53:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.571 ************************************ 00:06:39.571 START TEST event_scheduler 00:06:39.571 ************************************ 00:06:39.571 20:53:30 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:39.571 * Looking for test storage... 00:06:39.571 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:39.571 20:53:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:39.571 20:53:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3364016 00:06:39.571 20:53:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.571 20:53:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:39.571 20:53:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3364016 00:06:39.571 20:53:30 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3364016 ']' 00:06:39.571 20:53:30 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.571 20:53:30 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.571 20:53:30 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
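reactor_perf is a single-core microbenchmark: with `-t 1` it floods one reactor with events for a second and reports the sustained rate, 522,540 events per second on this run. The figure tracks CPU frequency and governor settings, so runs on different nodes are not directly comparable. A longer window smooths the number (sketch; `-t` is the run time in seconds, as above):

    ./test/event/reactor_perf/reactor_perf -t 10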
00:06:39.571 20:53:30 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.571 20:53:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.571 [2024-07-13 20:53:30.396818] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:39.571 [2024-07-13 20:53:30.396870] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3364016 ] 00:06:39.571 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.830 [2024-07-13 20:53:30.464721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.830 [2024-07-13 20:53:30.505276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.830 [2024-07-13 20:53:30.505360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.830 [2024-07-13 20:53:30.505444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.830 [2024-07-13 20:53:30.505446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:39.830 20:53:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.830 POWER: Env isn't set yet! 00:06:39.830 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:39.830 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:39.830 POWER: Cannot set governor of lcore 0 to userspace 00:06:39.830 POWER: Attempting to initialise PSTAT power management... 
00:06:39.830 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:39.830 POWER: Initialized successfully for lcore 0 power management 00:06:39.830 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:39.830 POWER: Initialized successfully for lcore 1 power management 00:06:39.830 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:39.830 POWER: Initialized successfully for lcore 2 power management 00:06:39.830 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:39.830 POWER: Initialized successfully for lcore 3 power management 00:06:39.830 [2024-07-13 20:53:30.583264] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:39.830 [2024-07-13 20:53:30.583282] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:39.830 [2024-07-13 20:53:30.583295] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.830 20:53:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.830 [2024-07-13 20:53:30.647502] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.830 20:53:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.830 20:53:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.830 ************************************ 00:06:39.830 START TEST scheduler_create_thread 00:06:39.830 ************************************ 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.830 2 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.830 3 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.830 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.830 4 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.089 5 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.089 6 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.089 7 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.089 8 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.089 9 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.089 10 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.089 20:53:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.024 20:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.024 20:53:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:41.024 20:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.024 20:53:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.398 20:53:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.398 20:53:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:42.398 20:53:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:42.398 20:53:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.398 20:53:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.330 20:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.330 00:06:43.330 real 0m3.382s 00:06:43.330 user 0m0.021s 00:06:43.330 sys 0m0.009s 00:06:43.330 20:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.330 20:53:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.330 ************************************ 00:06:43.330 END TEST scheduler_create_thread 00:06:43.330 ************************************ 00:06:43.330 20:53:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:43.330 20:53:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3364016 00:06:43.330 20:53:34 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3364016 ']' 00:06:43.330 20:53:34 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3364016 00:06:43.330 20:53:34 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
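The scheduler_create_thread sequence above builds its thread set through rpc.py plugin extensions rather than core RPCs: four busy pinned threads (active_pinned, `-a 100`, masks 0x1 through 0x8), four idle pinned threads (`-a 0`), an unpinned one_third_active thread at 30%, a half_active thread created idle and then raised to 50% with scheduler_thread_set_active, and finally a thread named deleted that is removed again. A minimal sketch of the calls involved, assuming the scheduler_plugin module is importable by rpc.py (in this test it lives beside the scheduler app under test/event/scheduler):

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100        # busy thread pinned to core 0
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12

The numeric ids (11, 12) are whatever thread_id the corresponding create call returned in this run.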
00:06:43.330 20:53:34 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.330 20:53:34 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3364016 00:06:43.330 20:53:34 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:43.330 20:53:34 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:43.330 20:53:34 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3364016' 00:06:43.330 killing process with pid 3364016 00:06:43.330 20:53:34 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3364016 00:06:43.330 20:53:34 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3364016 00:06:43.588 [2024-07-13 20:53:34.451450] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:43.846 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:43.846 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:43.846 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:43.846 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:43.846 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:43.847 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:43.847 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:43.847 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:43.847 00:06:43.847 real 0m4.426s 00:06:43.847 user 0m7.777s 00:06:43.847 sys 0m0.396s 00:06:43.847 20:53:34 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.847 20:53:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.847 ************************************ 00:06:43.847 END TEST event_scheduler 00:06:43.847 ************************************ 00:06:43.847 20:53:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:43.847 20:53:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:43.847 20:53:34 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.847 20:53:34 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.847 20:53:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.106 ************************************ 00:06:44.106 START TEST app_repeat 00:06:44.106 ************************************ 00:06:44.106 20:53:34 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3364762 00:06:44.107 20:53:34 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3364762' 00:06:44.107 Process app_repeat pid: 3364762 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:44.107 spdk_app_start Round 0 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3364762 /var/tmp/spdk-nbd.sock 00:06:44.107 20:53:34 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3364762 ']' 00:06:44.107 20:53:34 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.107 20:53:34 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.107 20:53:34 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.107 20:53:34 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.107 20:53:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.107 [2024-07-13 20:53:34.801188] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:44.107 [2024-07-13 20:53:34.801249] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3364762 ] 00:06:44.107 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.107 [2024-07-13 20:53:34.876670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.107 [2024-07-13 20:53:34.917035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.107 [2024-07-13 20:53:34.917038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.107 20:53:34 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.107 20:53:34 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:44.107 20:53:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.366 Malloc0 00:06:44.366 20:53:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.625 Malloc1 00:06:44.625 20:53:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.625 20:53:35 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.625 20:53:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.884 /dev/nbd0 00:06:44.884 20:53:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.884 20:53:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.884 1+0 records in 00:06:44.884 1+0 records out 00:06:44.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267532 s, 15.3 MB/s 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:44.884 20:53:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.884 20:53:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.884 20:53:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.884 /dev/nbd1 00:06:44.884 20:53:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.884 20:53:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@865 
-- # local i 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.884 1+0 records in 00:06:44.884 1+0 records out 00:06:44.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212545 s, 19.3 MB/s 00:06:44.884 20:53:35 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:45.143 20:53:35 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:45.143 20:53:35 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:45.143 20:53:35 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:45.143 20:53:35 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.143 { 00:06:45.143 "nbd_device": "/dev/nbd0", 00:06:45.143 "bdev_name": "Malloc0" 00:06:45.143 }, 00:06:45.143 { 00:06:45.143 "nbd_device": "/dev/nbd1", 00:06:45.143 "bdev_name": "Malloc1" 00:06:45.143 } 00:06:45.143 ]' 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.143 { 00:06:45.143 "nbd_device": "/dev/nbd0", 00:06:45.143 "bdev_name": "Malloc0" 00:06:45.143 }, 00:06:45.143 { 00:06:45.143 "nbd_device": "/dev/nbd1", 00:06:45.143 "bdev_name": "Malloc1" 00:06:45.143 } 00:06:45.143 ]' 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.143 /dev/nbd1' 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.143 /dev/nbd1' 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.143 20:53:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.143 256+0 records in 00:06:45.143 256+0 records out 00:06:45.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107245 s, 97.8 MB/s 00:06:45.143 20:53:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.143 20:53:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.143 256+0 records in 00:06:45.143 256+0 records out 00:06:45.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164439 s, 63.8 MB/s 00:06:45.143 20:53:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.143 20:53:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.402 256+0 records in 00:06:45.402 256+0 records out 00:06:45.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196306 s, 53.4 MB/s 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.402 20:53:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.403 20:53:36 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.403 20:53:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.661 20:53:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.920 20:53:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.920 20:53:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.178 20:53:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.178 [2024-07-13 20:53:37.031455] app.c: 
909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.178 [2024-07-13 20:53:37.066194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.178 [2024-07-13 20:53:37.066197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.436 [2024-07-13 20:53:37.107518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.436 [2024-07-13 20:53:37.107559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.725 20:53:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:49.725 20:53:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:49.725 spdk_app_start Round 1 00:06:49.725 20:53:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3364762 /var/tmp/spdk-nbd.sock 00:06:49.725 20:53:39 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3364762 ']' 00:06:49.725 20:53:39 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.725 20:53:39 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.725 20:53:39 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.725 20:53:39 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.725 20:53:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.725 20:53:40 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.725 20:53:40 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:49.725 20:53:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.725 Malloc0 00:06:49.725 20:53:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.725 Malloc1 00:06:49.725 20:53:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # 
local i 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.725 20:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.726 20:53:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.726 /dev/nbd0 00:06:49.726 20:53:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.726 20:53:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.726 1+0 records in 00:06:49.726 1+0 records out 00:06:49.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240546 s, 17.0 MB/s 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:49.726 20:53:40 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:49.726 20:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.726 20:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.726 20:53:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.984 /dev/nbd1 00:06:49.984 20:53:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.984 20:53:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.984 20:53:40 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:49.984 20:53:40 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@881 -- # dd 
if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.985 1+0 records in 00:06:49.985 1+0 records out 00:06:49.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257944 s, 15.9 MB/s 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:49.985 20:53:40 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:49.985 20:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.985 20:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.985 20:53:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.985 20:53:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.985 20:53:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.244 { 00:06:50.244 "nbd_device": "/dev/nbd0", 00:06:50.244 "bdev_name": "Malloc0" 00:06:50.244 }, 00:06:50.244 { 00:06:50.244 "nbd_device": "/dev/nbd1", 00:06:50.244 "bdev_name": "Malloc1" 00:06:50.244 } 00:06:50.244 ]' 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.244 { 00:06:50.244 "nbd_device": "/dev/nbd0", 00:06:50.244 "bdev_name": "Malloc0" 00:06:50.244 }, 00:06:50.244 { 00:06:50.244 "nbd_device": "/dev/nbd1", 00:06:50.244 "bdev_name": "Malloc1" 00:06:50.244 } 00:06:50.244 ]' 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.244 /dev/nbd1' 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.244 /dev/nbd1' 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.244 256+0 records in 
00:06:50.244 256+0 records out 00:06:50.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113839 s, 92.1 MB/s 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.244 20:53:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.244 256+0 records in 00:06:50.244 256+0 records out 00:06:50.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153573 s, 68.3 MB/s 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.244 256+0 records in 00:06:50.244 256+0 records out 00:06:50.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166576 s, 62.9 MB/s 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.244 20:53:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.503 20:53:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.503 20:53:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.503 20:53:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.503 20:53:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.503 20:53:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:50.503 20:53:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.503 20:53:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.503 20:53:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.503 20:53:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.503 20:53:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.763 20:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.022 20:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.022 20:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.022 20:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.022 20:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.022 20:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.022 20:53:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.022 20:53:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.022 20:53:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.022 20:53:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.023 20:53:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.023 20:53:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:51.281 [2024-07-13 20:53:42.032962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.281 [2024-07-13 20:53:42.068362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.281 [2024-07-13 20:53:42.068365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.281 [2024-07-13 20:53:42.110728] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.281 [2024-07-13 20:53:42.110771] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
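
The entries above close out one full nbd_dd_data_verify cycle: 1 MiB of random data is staged in a temp file, pushed through both NBD devices with O_DIRECT, read back with cmp, and the devices are then detached and confirmed gone before the app instance is killed with spdk_kill_instance. A minimal standalone sketch of the write/verify half — the temp path is a stand-in for the workspace file, everything else mirrors the trace:

    # Stage random data, write it through each NBD device, then read it back.
    tmp_file=/tmp/nbdrandtest            # stand-in for .../spdk/test/event/nbdrandtest
    nbd_list=('/dev/nbd0' '/dev/nbd1')

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of test data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # bypass the page cache
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"                              # byte-for-byte readback check
    done
    rm "$tmp_file"
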
00:06:54.568 20:53:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.568 20:53:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:54.568 spdk_app_start Round 2 00:06:54.568 20:53:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3364762 /var/tmp/spdk-nbd.sock 00:06:54.568 20:53:44 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3364762 ']' 00:06:54.568 20:53:44 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.568 20:53:44 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.568 20:53:44 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.568 20:53:44 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.568 20:53:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.568 20:53:45 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.568 20:53:45 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:54.568 20:53:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.568 Malloc0 00:06:54.568 20:53:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.568 Malloc1 00:06:54.568 20:53:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.568 20:53:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.828 /dev/nbd0 00:06:54.828 20:53:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.828 20:53:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
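
The waitfornbd call traced at the end of the block above is the readiness half of the polling pair this test leans on throughout: wait for the device node to show up in /proc/partitions, then prove it actually serves reads with a single O_DIRECT block. A sketch of both helpers as reconstructed from the trace — the 0.1 s sleep is an assumption (only the counter bound, the grep, and the dd/stat probe are visible in the log), and /tmp/nbdtest stands in for the workspace test file:

    waitfornbd() {                          # wait until the device appears and serves I/O
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                       # assumed poll interval, not shown in the trace
        done
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)     # the stat/size check seen in the trace
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

    waitfornbd_exit() {                     # wait until the device is gone again
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone: the 'break' in the trace
            sleep 0.1
        done
        return 0
    }
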
00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.828 1+0 records in 00:06:54.828 1+0 records out 00:06:54.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225755 s, 18.1 MB/s 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:54.828 20:53:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:54.828 20:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.828 20:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.828 20:53:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:55.087 /dev/nbd1 00:06:55.087 20:53:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:55.087 20:53:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.087 1+0 records in 00:06:55.087 1+0 records out 00:06:55.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272035 s, 15.1 MB/s 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:55.087 20:53:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:55.087 20:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.087 20:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.087 20:53:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.087 20:53:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.087 20:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.087 20:53:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:55.087 { 00:06:55.087 "nbd_device": "/dev/nbd0", 00:06:55.087 "bdev_name": "Malloc0" 00:06:55.087 }, 00:06:55.087 { 00:06:55.087 "nbd_device": "/dev/nbd1", 00:06:55.087 "bdev_name": "Malloc1" 00:06:55.087 } 00:06:55.087 ]' 00:06:55.347 20:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.347 { 00:06:55.347 "nbd_device": "/dev/nbd0", 00:06:55.347 "bdev_name": "Malloc0" 00:06:55.347 }, 00:06:55.347 { 00:06:55.347 "nbd_device": "/dev/nbd1", 00:06:55.347 "bdev_name": "Malloc1" 00:06:55.347 } 00:06:55.347 ]' 00:06:55.347 20:53:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.347 /dev/nbd1' 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.347 /dev/nbd1' 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:55.347 256+0 records in 00:06:55.347 256+0 records out 00:06:55.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010306 s, 102 MB/s 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:55.347 256+0 records in 00:06:55.347 256+0 records out 00:06:55.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196476 s, 53.4 MB/s 00:06:55.347 20:53:46 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:55.347 256+0 records in 00:06:55.347 256+0 records out 00:06:55.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208721 s, 50.2 MB/s 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.347 20:53:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.607 20:53:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.607 20:53:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.607 20:53:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.607 20:53:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.607 20:53:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.607 20:53:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.607 20:53:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.607 20:53:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.607 20:53:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.607 20:53:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.607 20:53:46 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.866 20:53:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.867 20:53:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.867 20:53:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.867 20:53:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.867 20:53:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:56.126 20:53:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:56.385 [2024-07-13 20:53:47.090293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.385 [2024-07-13 20:53:47.125451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.385 [2024-07-13 20:53:47.125454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.385 [2024-07-13 20:53:47.166960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:56.385 [2024-07-13 20:53:47.167006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:59.744 20:53:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3364762 /var/tmp/spdk-nbd.sock 00:06:59.744 20:53:49 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3364762 ']' 00:06:59.744 20:53:49 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.744 20:53:49 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:59.744 20:53:49 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:59.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 20:53:49 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:59.744 20:53:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:06:59.744 20:53:50 event.app_repeat -- event/event.sh@39 -- # killprocess 3364762
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3364762 ']'
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3364762
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@951 -- # uname
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3364762
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3364762'
00:06:59.744 killing process with pid 3364762
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3364762
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3364762
00:06:59.744 spdk_app_start is called in Round 0.
00:06:59.744 Shutdown signal received, stop current app iteration
00:06:59.744 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization...
00:06:59.744 spdk_app_start is called in Round 1.
00:06:59.744 Shutdown signal received, stop current app iteration
00:06:59.744 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization...
00:06:59.744 spdk_app_start is called in Round 2.
00:06:59.744 Shutdown signal received, stop current app iteration
00:06:59.744 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization...
00:06:59.744 spdk_app_start is called in Round 3.
00:06:59.744 Shutdown signal received, stop current app iteration
00:06:59.744 20:53:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:59.744 20:53:50 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:59.744
00:06:59.744 real 0m15.544s
00:06:59.744 user 0m33.140s
00:06:59.744 sys 0m2.905s
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:59.744 20:53:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:59.744 ************************************
00:06:59.744 END TEST app_repeat
00:06:59.744 ************************************
00:06:59.744 20:53:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:59.744 20:53:50 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:59.744 20:53:50 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:59.744 20:53:50 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:59.744 20:53:50 event -- common/autotest_common.sh@10 -- # set +x
00:06:59.744 ************************************
00:06:59.744 START TEST cpu_locks
00:06:59.744 ************************************
00:06:59.745 20:53:50 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:59.745 * Looking for test storage...
00:06:59.745 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event
00:06:59.745 20:53:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:59.745 20:53:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:59.745 20:53:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:59.745 20:53:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:59.745 20:53:50 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:59.745 20:53:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:59.745 20:53:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:59.745 ************************************
00:06:59.745 START TEST default_locks
00:06:59.745 ************************************
00:06:59.745 20:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks
00:06:59.745 20:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3367790
00:06:59.745 20:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3367790
00:06:59.745 20:53:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:59.745 20:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3367790 ']'
00:06:59.745 20:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.745 20:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:59.745 20:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
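
The killprocess helper exercised just above is the standard teardown in these tests: verify the pid, look up the process name so a sudo wrapper could be special-cased, then kill and reap. A simplified sketch of the path the trace takes (the sudo branch guarded by the '[' reactor_0 = sudo ']' test is omitted here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                    # no pid given
        kill -0 "$pid" || return 1                   # bail out if it is already gone
        local process_name
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reaping works because spdk_tgt was launched by this same shell
    }
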
00:06:59.745 20:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:59.745 20:53:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.745 [2024-07-13 20:53:50.577441] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:59.745 [2024-07-13 20:53:50.577486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3367790 ] 00:06:59.745 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.015 [2024-07-13 20:53:50.646715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.016 [2024-07-13 20:53:50.685776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.590 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:00.590 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:00.590 20:53:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3367790 00:07:00.590 20:53:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3367790 00:07:00.590 20:53:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.157 lslocks: write error 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3367790 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3367790 ']' 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3367790 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3367790 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3367790' 00:07:01.157 killing process with pid 3367790 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3367790 00:07:01.157 20:53:51 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3367790 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3367790 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3367790 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 3367790 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3367790 ']' 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.416 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3367790) - No such process 00:07:01.416 ERROR: process (pid: 3367790) is no longer running 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:01.416 00:07:01.416 real 0m1.703s 00:07:01.416 user 0m1.778s 00:07:01.416 sys 0m0.609s 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.416 20:53:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.416 ************************************ 00:07:01.416 END TEST default_locks 00:07:01.416 ************************************ 00:07:01.416 20:53:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:01.416 20:53:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:01.416 20:53:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.416 20:53:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.676 ************************************ 00:07:01.676 START TEST default_locks_via_rpc 00:07:01.676 ************************************ 00:07:01.676 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:01.676 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3368090 00:07:01.676 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3368090 00:07:01.676 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.676 20:53:52 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3368090 ']' 00:07:01.676 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.676 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.676 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.676 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.676 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.676 [2024-07-13 20:53:52.366225] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:01.676 [2024-07-13 20:53:52.366275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368090 ] 00:07:01.676 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.676 [2024-07-13 20:53:52.437778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.676 [2024-07-13 20:53:52.477050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3368090 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3368090 00:07:01.936 20:53:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.195 20:53:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3368090 00:07:02.195 20:53:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3368090 ']' 00:07:02.195 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3368090 00:07:02.195 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:02.195 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.195 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3368090 00:07:02.454 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.454 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.454 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3368090' 00:07:02.454 killing process with pid 3368090 00:07:02.454 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3368090 00:07:02.454 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3368090 00:07:02.713 00:07:02.713 real 0m1.088s 00:07:02.713 user 0m1.039s 00:07:02.713 sys 0m0.525s 00:07:02.713 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.713 20:53:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.713 ************************************ 00:07:02.713 END TEST default_locks_via_rpc 00:07:02.713 ************************************ 00:07:02.713 20:53:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:02.713 20:53:53 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.713 20:53:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.713 20:53:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.713 ************************************ 00:07:02.713 START TEST non_locking_app_on_locked_coremask 00:07:02.713 ************************************ 00:07:02.713 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:02.713 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3368333 00:07:02.713 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3368333 /var/tmp/spdk.sock 00:07:02.713 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.713 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3368333 ']' 00:07:02.713 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.713 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.714 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
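
Both lock tests decide whether the one-core target is holding its CPU core lock with the same probe, seen in the lslocks/grep pair above: list the file locks held by the pid and search for the spdk_cpu_lock path. The stray "lslocks: write error" printed earlier is a side effect of that pipeline, not a failure — grep -q exits at the first match, so lslocks takes an EPIPE while still writing. A minimal sketch:

    # A pid "holds core locks" when its lock table mentions spdk_cpu_lock.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
        # grep -q closes the pipe at the first hit, so lslocks may report
        # "write error" (EPIPE) -- harmless, as the log above shows.
    }

    locks_exist 3368090 && echo 'core lock held'
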
00:07:02.714 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.714 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.714 [2024-07-13 20:53:53.538291] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:02.714 [2024-07-13 20:53:53.538342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368333 ] 00:07:02.714 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.972 [2024-07-13 20:53:53.608284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.972 [2024-07-13 20:53:53.647667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3368387 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3368387 /var/tmp/spdk2.sock 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3368387 ']' 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.972 20:53:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.231 [2024-07-13 20:53:53.881249] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:03.231 [2024-07-13 20:53:53.881303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368387 ] 00:07:03.231 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.231 [2024-07-13 20:53:53.977931] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
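What this pair of launches exercises, condensed: the first spdk_tgt claims its core through an advisory lock file, while the second runs on the same core mask but opts out with --disable-cpumask-locks, so both come up. Binary, mask, and socket are as in the trace; the lock-file name /var/tmp/spdk_cpu_lock_000 appears verbatim later in this log:

  SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt

  $SPDK_BIN -m 0x1 &                                    # claims /var/tmp/spdk_cpu_lock_000
  pid1=$!
  $SPDK_BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                               # same core, starts anyway: no lock taken

  # Only the first process holds the lock. The stray "lslocks: write error"
  # seen in this log is likely benign: grep -q exits on the first match and
  # closes the pipe early, so lslocks fails its last write.
  lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "pid1 holds its core lock"
  lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "pid2 holds no core lock"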
00:07:03.231 [2024-07-13 20:53:53.977952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.231 [2024-07-13 20:53:54.052188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.797 20:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.797 20:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:03.797 20:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3368333 00:07:03.797 20:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3368333 00:07:03.797 20:53:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.175 lslocks: write error 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3368333 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3368333 ']' 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3368333 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3368333 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3368333' 00:07:05.175 killing process with pid 3368333 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3368333 00:07:05.175 20:53:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3368333 00:07:05.434 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3368387 00:07:05.434 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3368387 ']' 00:07:05.434 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3368387 00:07:05.434 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:05.434 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:05.434 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3368387 00:07:05.693 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:05.693 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:05.693 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3368387' 00:07:05.693 
killing process with pid 3368387 00:07:05.693 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3368387 00:07:05.693 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3368387 00:07:05.952 00:07:05.952 real 0m3.151s 00:07:05.952 user 0m3.270s 00:07:05.952 sys 0m1.170s 00:07:05.952 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.952 20:53:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.952 ************************************ 00:07:05.952 END TEST non_locking_app_on_locked_coremask 00:07:05.952 ************************************ 00:07:05.952 20:53:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:05.952 20:53:56 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.952 20:53:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.952 20:53:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.952 ************************************ 00:07:05.952 START TEST locking_app_on_unlocked_coremask 00:07:05.952 ************************************ 00:07:05.952 20:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:05.952 20:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3368949 00:07:05.952 20:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3368949 /var/tmp/spdk.sock 00:07:05.952 20:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:05.952 20:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3368949 ']' 00:07:05.952 20:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.952 20:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:05.952 20:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.952 20:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:05.952 20:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.952 [2024-07-13 20:53:56.772350] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:05.952 [2024-07-13 20:53:56.772397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368949 ] 00:07:05.952 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.211 [2024-07-13 20:53:56.843827] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
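The per-core claim itself is an ordinary advisory file lock, so the collision these tests look for can be reproduced with util-linux flock(1) alone. An analogy only, not SPDK code, and the lock path here is a stand-in for /var/tmp/spdk_cpu_lock_000:

  lock=/tmp/demo_cpu_lock_000
  flock -n "$lock" -c 'sleep 30' &     # first claimant takes the lock and holds it
  sleep 1
  flock -n "$lock" -c true \
      || echo 'cannot lock: core already claimed'   # second claimant fails immediately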
00:07:06.211 [2024-07-13 20:53:56.843851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.211 [2024-07-13 20:53:56.882569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3368970 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3368970 /var/tmp/spdk2.sock 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3368970 ']' 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.780 20:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.780 [2024-07-13 20:53:57.614925] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:06.780 [2024-07-13 20:53:57.614979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368970 ] 00:07:06.780 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.039 [2024-07-13 20:53:57.711332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.039 [2024-07-13 20:53:57.785842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.606 20:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.606 20:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:07.606 20:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3368970 00:07:07.606 20:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3368970 00:07:07.606 20:53:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.542 lslocks: write error 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3368949 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3368949 ']' 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3368949 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3368949 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3368949' 00:07:08.542 killing process with pid 3368949 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3368949 00:07:08.542 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3368949 00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3368970 00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3368970 ']' 00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3368970 00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3368970 00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3368970' 00:07:09.109 killing process with pid 3368970 00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3368970 00:07:09.109 20:53:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3368970 00:07:09.368 00:07:09.368 real 0m3.488s 00:07:09.368 user 0m3.741s 00:07:09.368 sys 0m1.087s 00:07:09.368 20:54:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.368 20:54:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.368 ************************************ 00:07:09.368 END TEST locking_app_on_unlocked_coremask 00:07:09.368 ************************************ 00:07:09.368 20:54:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:09.368 20:54:00 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.368 20:54:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.368 20:54:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.628 ************************************ 00:07:09.628 START TEST locking_app_on_locked_coremask 00:07:09.628 ************************************ 00:07:09.628 20:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:09.628 20:54:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3369552 00:07:09.628 20:54:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3369552 /var/tmp/spdk.sock 00:07:09.628 20:54:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.628 20:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3369552 ']' 00:07:09.628 20:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.628 20:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.628 20:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.628 20:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.628 20:54:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.628 [2024-07-13 20:54:00.345845] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:09.628 [2024-07-13 20:54:00.345890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369552 ] 00:07:09.628 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.628 [2024-07-13 20:54:00.415887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.628 [2024-07-13 20:54:00.455398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.565 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.565 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:10.565 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3369779 00:07:10.565 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3369779 /var/tmp/spdk2.sock 00:07:10.565 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:10.565 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3369779 /var/tmp/spdk2.sock 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3369779 /var/tmp/spdk2.sock 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3369779 ']' 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.566 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.566 [2024-07-13 20:54:01.186332] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:10.566 [2024-07-13 20:54:01.186390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369779 ] 00:07:10.566 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.566 [2024-07-13 20:54:01.282038] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3369552 has claimed it. 00:07:10.566 [2024-07-13 20:54:01.282078] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:11.134 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3369779) - No such process 00:07:11.134 ERROR: process (pid: 3369779) is no longer running 00:07:11.134 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.134 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:11.134 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:11.134 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.134 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.134 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.134 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3369552 00:07:11.134 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3369552 00:07:11.134 20:54:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.393 lslocks: write error 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3369552 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3369552 ']' 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3369552 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3369552 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3369552' 00:07:11.393 killing process with pid 3369552 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3369552 00:07:11.393 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3369552 00:07:11.652 00:07:11.652 real 0m2.194s 00:07:11.652 user 0m2.408s 00:07:11.652 sys 0m0.633s 00:07:11.652 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.652 20:54:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.652 ************************************ 00:07:11.652 END TEST locking_app_on_locked_coremask 00:07:11.652 ************************************ 00:07:11.652 20:54:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:11.652 20:54:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:11.652 20:54:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.652 20:54:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.911 ************************************ 00:07:11.911 START TEST locking_overlapped_coremask 00:07:11.911 ************************************ 00:07:11.911 20:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:11.911 20:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3370074 00:07:11.911 20:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:11.911 20:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3370074 /var/tmp/spdk.sock 00:07:11.911 20:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3370074 ']' 00:07:11.911 20:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.911 20:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:11.912 20:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.912 20:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:11.912 20:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.912 [2024-07-13 20:54:02.618570] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:11.912 [2024-07-13 20:54:02.618617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370074 ] 00:07:11.912 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.912 [2024-07-13 20:54:02.688972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.912 [2024-07-13 20:54:02.730471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.912 [2024-07-13 20:54:02.730493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.912 [2024-07-13 20:54:02.730496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.848 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:12.848 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:12.848 20:54:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3370230 00:07:12.848 20:54:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3370230 /var/tmp/spdk2.sock 00:07:12.848 20:54:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:12.848 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3370230 /var/tmp/spdk2.sock 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3370230 /var/tmp/spdk2.sock 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3370230 ']' 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.849 20:54:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 [2024-07-13 20:54:03.475523] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:12.849 [2024-07-13 20:54:03.475572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370230 ] 00:07:12.849 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.849 [2024-07-13 20:54:03.577035] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3370074 has claimed it. 00:07:12.849 [2024-07-13 20:54:03.577073] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:13.417 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3370230) - No such process 00:07:13.417 ERROR: process (pid: 3370230) is no longer running 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3370074 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3370074 ']' 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3370074 00:07:13.417 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:13.418 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:13.418 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3370074 00:07:13.418 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:13.418 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:13.418 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3370074' 00:07:13.418 killing process with pid 3370074 00:07:13.418 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 3370074 
00:07:13.418 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3370074 00:07:13.678 00:07:13.678 real 0m1.886s 00:07:13.678 user 0m5.325s 00:07:13.678 sys 0m0.488s 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 ************************************ 00:07:13.678 END TEST locking_overlapped_coremask 00:07:13.678 ************************************ 00:07:13.678 20:54:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:13.678 20:54:04 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:13.678 20:54:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.678 20:54:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 ************************************ 00:07:13.678 START TEST locking_overlapped_coremask_via_rpc 00:07:13.678 ************************************ 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3370687 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3370687 /var/tmp/spdk.sock 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3370687 ']' 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.678 20:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.937 [2024-07-13 20:54:04.585953] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:13.937 [2024-07-13 20:54:04.586000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370687 ] 00:07:13.937 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.937 [2024-07-13 20:54:04.657756] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
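This test covers the runtime variant: both targets boot with --disable-cpumask-locks, and the first one only claims its cores afterwards over JSON-RPC, so the overlapping second target fails at the RPC rather than at startup. Condensed sequence; the method name and sockets are from the trace, while the rpc.py path is assumed from this workspace layout (rpc_cmd in the trace is a thin wrapper around it):

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # first target (-m 0x7, default socket) claims cores 0-2 at runtime:
  $RPC framework_enable_cpumask_locks

  # second target (-m 0x1c on /var/tmp/spdk2.sock) overlaps on core 2,
  # so the same call comes back with the -32603 error shown further down:
  $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks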
00:07:13.937 [2024-07-13 20:54:04.657779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.937 [2024-07-13 20:54:04.698805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.937 [2024-07-13 20:54:04.698898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.937 [2024-07-13 20:54:04.698900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3370869 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3370869 /var/tmp/spdk2.sock 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3370869 ']' 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.876 20:54:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.876 [2024-07-13 20:54:05.451690] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:14.876 [2024-07-13 20:54:05.451747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370869 ] 00:07:14.876 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.876 [2024-07-13 20:54:05.557242] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
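Why the collision lands on core 2 specifically: the two masks intersect in exactly one bit, which is what the claim attempt below reports.

  # 0x7  = 0b00111 -> cores 0,1,2  (first target)
  # 0x1c = 0b11100 -> cores 2,3,4  (second target)
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 only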
00:07:14.876 [2024-07-13 20:54:05.557269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.876 [2024-07-13 20:54:05.639596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.876 [2024-07-13 20:54:05.643061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.876 [2024-07-13 20:54:05.643062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.445 [2024-07-13 20:54:06.268085] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3370687 has claimed it. 
00:07:15.445 request:
00:07:15.445 {
00:07:15.445 "method": "framework_enable_cpumask_locks",
00:07:15.445 "req_id": 1
00:07:15.445 }
00:07:15.445 Got JSON-RPC error response
00:07:15.445 response:
00:07:15.445 {
00:07:15.445 "code": -32603,
00:07:15.445 "message": "Failed to claim CPU core: 2"
00:07:15.445 }
00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1
00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:15.445 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3370687 /var/tmp/spdk.sock
00:07:15.446 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3370687 ']'
00:07:15.446 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.446 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:15.446 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
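The es bookkeeping above comes from the NOT() wrapper that runs a command expected to fail and inverts its status. Reconstructed roughly from the @648-@675 xtrace lines (simplified; the real helper has a few more branches):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # killed by a signal: propagate the failure
      (( es != 0 ))                    # succeed only if the wrapped command failed
  }

  # as used in this test:
  NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks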
00:07:15.705 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.705 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.965 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.965 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:15.965 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:15.965 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.965 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.965 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.965 00:07:15.965 real 0m2.107s 00:07:15.965 user 0m0.847s 00:07:15.965 sys 0m0.191s 00:07:15.965 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.965 20:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.965 ************************************ 00:07:15.965 END TEST locking_overlapped_coremask_via_rpc 00:07:15.965 ************************************ 00:07:15.965 20:54:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:15.965 20:54:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3370687 ]] 00:07:15.965 20:54:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3370687 00:07:15.965 20:54:06 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3370687 ']' 00:07:15.965 20:54:06 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3370687 00:07:15.965 20:54:06 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:15.965 20:54:06 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:15.965 20:54:06 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3370687 00:07:15.965 20:54:06 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:15.965 20:54:06 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:15.965 20:54:06 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3370687' 00:07:15.965 killing process with pid 3370687 00:07:15.965 20:54:06 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3370687 00:07:15.965 20:54:06 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3370687 00:07:16.225 20:54:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3370869 ]] 00:07:16.225 20:54:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3370869 00:07:16.225 20:54:07 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3370869 ']' 00:07:16.225 20:54:07 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3370869 00:07:16.225 20:54:07 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:16.225 20:54:07 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:07:16.225 20:54:07 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3370869 00:07:16.225 20:54:07 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:16.225 20:54:07 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:16.225 20:54:07 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3370869' 00:07:16.225 killing process with pid 3370869 00:07:16.225 20:54:07 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3370869 00:07:16.225 20:54:07 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3370869 00:07:16.794 20:54:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:16.794 20:54:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:16.794 20:54:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3370687 ]] 00:07:16.794 20:54:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3370687 00:07:16.794 20:54:07 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3370687 ']' 00:07:16.794 20:54:07 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3370687 00:07:16.794 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3370687) - No such process 00:07:16.794 20:54:07 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3370687 is not found' 00:07:16.794 Process with pid 3370687 is not found 00:07:16.794 20:54:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3370869 ]] 00:07:16.794 20:54:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3370869 00:07:16.794 20:54:07 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3370869 ']' 00:07:16.794 20:54:07 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3370869 00:07:16.794 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3370869) - No such process 00:07:16.794 20:54:07 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3370869 is not found' 00:07:16.794 Process with pid 3370869 is not found 00:07:16.794 20:54:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:16.794 00:07:16.794 real 0m17.026s 00:07:16.794 user 0m29.075s 00:07:16.794 sys 0m5.768s 00:07:16.794 20:54:07 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.794 20:54:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.794 ************************************ 00:07:16.794 END TEST cpu_locks 00:07:16.794 ************************************ 00:07:16.794 00:07:16.794 real 0m41.171s 00:07:16.794 user 1m16.478s 00:07:16.794 sys 0m9.795s 00:07:16.794 20:54:07 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.794 20:54:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.794 ************************************ 00:07:16.794 END TEST event 00:07:16.794 ************************************ 00:07:16.794 20:54:07 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:16.794 20:54:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:16.794 20:54:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.794 20:54:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.794 ************************************ 00:07:16.794 START TEST thread 00:07:16.794 ************************************ 00:07:16.794 20:54:07 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh
00:07:16.794 * Looking for test storage...
00:07:16.794 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread
00:07:16.794 20:54:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:16.794 20:54:07 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
00:07:16.794 20:54:07 thread -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:16.794 20:54:07 thread -- common/autotest_common.sh@10 -- # set +x
00:07:16.794 ************************************
00:07:16.794 START TEST thread_poller_perf
00:07:16.794 ************************************
00:07:16.794 20:54:07 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:17.101 [2024-07-13 20:54:07.693023] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:17.101 [2024-07-13 20:54:07.693109] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371585 ]
00:07:17.101 EAL: No free 2048 kB hugepages reported on node 1
00:07:17.101 [2024-07-13 20:54:07.766328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:17.101 [2024-07-13 20:54:07.804625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.101 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:07:18.038 ======================================
00:07:18.038 busy:2506480516 (cyc)
00:07:18.038 total_run_count: 428000
00:07:18.038 tsc_hz: 2500000000 (cyc)
00:07:18.038 ======================================
00:07:18.038 poller_cost: 5856 (cyc), 2342 (nsec)
00:07:18.038
00:07:18.038 real 0m1.198s
00:07:18.038 user 0m1.104s
00:07:18.038 sys 0m0.090s
00:07:18.038 20:54:08 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:18.038 20:54:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:18.038 ************************************
00:07:18.038 END TEST thread_poller_perf
00:07:18.038 ************************************
00:07:18.038 20:54:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:18.038 20:54:08 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
00:07:18.038 20:54:08 thread -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:18.038 20:54:08 thread -- common/autotest_common.sh@10 -- # set +x
00:07:18.297 ************************************
00:07:18.297 START TEST thread_poller_perf
00:07:18.297 ************************************
00:07:18.297 20:54:08 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:18.297 [2024-07-13 20:54:08.978416] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:18.297 [2024-07-13 20:54:08.978498] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371823 ]
00:07:18.297 EAL: No free 2048 kB hugepages reported on node 1
00:07:18.298 [2024-07-13 20:54:09.049566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.298 [2024-07-13 20:54:09.087329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.298 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:07:19.674 ======================================
00:07:19.674 busy:2501747238 (cyc)
00:07:19.674 total_run_count: 5657000
00:07:19.674 tsc_hz: 2500000000 (cyc)
00:07:19.674 ======================================
00:07:19.674 poller_cost: 442 (cyc), 176 (nsec)
00:07:19.674
00:07:19.674 real 0m1.195s
00:07:19.674 user 0m1.105s
00:07:19.674 sys 0m0.086s
00:07:19.674 20:54:10 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:19.674 20:54:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:19.674 ************************************
00:07:19.674 END TEST thread_poller_perf
00:07:19.674 ************************************
00:07:19.674 20:54:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:07:19.674
00:07:19.674 real 0m2.666s
00:07:19.674 user 0m2.308s
00:07:19.674 sys 0m0.373s
00:07:19.674 20:54:10 thread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:19.674 20:54:10 thread -- common/autotest_common.sh@10 -- # set +x
00:07:19.674 ************************************
00:07:19.674 END TEST thread
00:07:19.674 ************************************
00:07:19.674 20:54:10 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh
00:07:19.674 20:54:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:07:19.674 20:54:10 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:19.674 20:54:10 -- common/autotest_common.sh@10 -- # set +x
00:07:19.674 ************************************
00:07:19.674 START TEST accel
00:07:19.674 ************************************
00:07:19.674 20:54:10 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh
00:07:19.674 * Looking for test storage...
00:07:19.674 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:19.674 20:54:10 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:19.674 20:54:10 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:19.674 20:54:10 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:19.674 20:54:10 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3372102 00:07:19.674 20:54:10 accel -- accel/accel.sh@63 -- # waitforlisten 3372102 00:07:19.674 20:54:10 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:19.674 20:54:10 accel -- common/autotest_common.sh@827 -- # '[' -z 3372102 ']' 00:07:19.674 20:54:10 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.674 20:54:10 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:19.674 20:54:10 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:19.674 20:54:10 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.674 20:54:10 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.674 20:54:10 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.674 20:54:10 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:19.674 20:54:10 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.674 20:54:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.674 20:54:10 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.674 20:54:10 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.674 20:54:10 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:19.674 20:54:10 accel -- accel/accel.sh@41 -- # jq -r . 00:07:19.674 [2024-07-13 20:54:10.425578] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:19.674 [2024-07-13 20:54:10.425636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372102 ] 00:07:19.674 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.674 [2024-07-13 20:54:10.498121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.674 [2024-07-13 20:54:10.537357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.933 20:54:10 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:19.933 20:54:10 accel -- common/autotest_common.sh@860 -- # return 0 00:07:19.933 20:54:10 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:19.933 20:54:10 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:19.933 20:54:10 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:19.933 20:54:10 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:19.933 20:54:10 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:19.933 20:54:10 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:19.933 20:54:10 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:19.933 20:54:10 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.933 20:54:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.933 20:54:10 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 
20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.933 20:54:10 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.933 20:54:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.933 20:54:10 accel -- accel/accel.sh@75 -- # killprocess 3372102 00:07:19.933 20:54:10 accel -- common/autotest_common.sh@946 -- # '[' -z 3372102 ']' 00:07:19.933 20:54:10 accel -- common/autotest_common.sh@950 -- # kill -0 3372102 00:07:19.933 20:54:10 accel -- common/autotest_common.sh@951 -- # uname 00:07:19.933 20:54:10 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.933 20:54:10 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3372102 00:07:20.193 20:54:10 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:20.193 20:54:10 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:20.193 20:54:10 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3372102' 00:07:20.193 killing process with pid 3372102 00:07:20.193 20:54:10 accel -- common/autotest_common.sh@965 -- # kill 3372102 00:07:20.193 20:54:10 accel -- common/autotest_common.sh@970 -- # wait 3372102 00:07:20.452 20:54:11 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:20.453 20:54:11 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:20.453 20:54:11 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:20.453 20:54:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.453 20:54:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.453 20:54:11 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:20.453 20:54:11 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:20.453 20:54:11 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:20.453 20:54:11 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.453 20:54:11 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.453 20:54:11 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.453 20:54:11 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.453 20:54:11 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.453 20:54:11 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:20.453 20:54:11 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:20.453 20:54:11 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.453 20:54:11 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:20.453 20:54:11 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:20.453 20:54:11 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:20.453 20:54:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.453 20:54:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.453 ************************************ 00:07:20.453 START TEST accel_missing_filename 00:07:20.453 ************************************ 00:07:20.453 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:20.453 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:20.453 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:20.453 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.453 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.453 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.453 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.453 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:20.453 20:54:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:20.453 20:54:11 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:20.453 20:54:11 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.453 20:54:11 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.453 20:54:11 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.453 20:54:11 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.453 20:54:11 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.453 20:54:11 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:20.453 20:54:11 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:20.453 [2024-07-13 20:54:11.289748] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:20.453 [2024-07-13 20:54:11.289808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372257 ] 00:07:20.453 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.712 [2024-07-13 20:54:11.360071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.712 [2024-07-13 20:54:11.399208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.712 [2024-07-13 20:54:11.440153] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.712 [2024-07-13 20:54:11.499682] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:20.713 A filename is required. 
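The es= trace that follows shows how the harness's NOT wrapper folds accel_perf's failure status into a pass for this negative test: a status above 128 (here 234) has the signal offset of 128 subtracted, the remainder collapses to es=1, and NOT succeeds only because es is nonzero. A rough bash sketch of that reduction (variable names mirror the trace; the exact case mapping in autotest_common.sh may differ):

# illustrative status reduction, mirroring the es= lines below
es=234
(( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106: strip the signal offset
case "$es" in
  0) ;;          # unexpected success
  *) es=1 ;;     # any remaining failure collapses to 1
esac
(( !es == 0 )) && echo 'NOT: command failed as expected'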
00:07:20.713 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:20.713 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.713 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:20.713 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.713 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:20.713 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.713 00:07:20.713 real 0m0.302s 00:07:20.713 user 0m0.212s 00:07:20.713 sys 0m0.128s 00:07:20.713 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.713 20:54:11 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:20.713 ************************************ 00:07:20.713 END TEST accel_missing_filename 00:07:20.713 ************************************ 00:07:20.973 20:54:11 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:20.973 20:54:11 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:20.973 20:54:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.973 20:54:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.973 ************************************ 00:07:20.973 START TEST accel_compress_verify 00:07:20.973 ************************************ 00:07:20.973 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:20.973 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:20.973 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:20.973 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.973 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.973 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.973 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.973 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:20.973 20:54:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:20.973 20:54:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:20.973 20:54:11 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.973 20:54:11 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.973 20:54:11 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.973 20:54:11 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.973 20:54:11 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.973 20:54:11 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:20.973 20:54:11 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:20.973 [2024-07-13 20:54:11.675205] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:20.973 [2024-07-13 20:54:11.675268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372293 ] 00:07:20.973 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.973 [2024-07-13 20:54:11.746121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.973 [2024-07-13 20:54:11.785177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.973 [2024-07-13 20:54:11.826158] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.233 [2024-07-13 20:54:11.886231] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:21.233 00:07:21.233 Compression does not support the verify option, aborting. 00:07:21.233 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:21.233 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.233 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:21.233 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:21.233 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:21.233 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.233 00:07:21.233 real 0m0.304s 00:07:21.233 user 0m0.206s 00:07:21.233 sys 0m0.137s 00:07:21.233 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.233 20:54:11 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:21.233 ************************************ 00:07:21.233 END TEST accel_compress_verify 00:07:21.233 ************************************ 00:07:21.233 20:54:11 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:21.233 20:54:11 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:21.233 20:54:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.233 20:54:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.233 ************************************ 00:07:21.233 START TEST accel_wrong_workload 00:07:21.233 ************************************ 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:21.233 
20:54:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:21.233 20:54:12 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:21.233 20:54:12 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.233 20:54:12 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.233 20:54:12 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.233 20:54:12 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.233 20:54:12 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.233 20:54:12 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:21.233 20:54:12 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:21.233 Unsupported workload type: foobar 00:07:21.233 [2024-07-13 20:54:12.059039] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:21.233 accel_perf options: 00:07:21.233 [-h help message] 00:07:21.233 [-q queue depth per core] 00:07:21.233 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:21.233 [-T number of threads per core 00:07:21.233 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:21.233 [-t time in seconds] 00:07:21.233 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:21.233 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:21.233 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:21.233 [-l for compress/decompress workloads, name of uncompressed input file 00:07:21.233 [-S for crc32c workload, use this seed value (default 0) 00:07:21.233 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:21.233 [-f for fill workload, use this BYTE value (default 255) 00:07:21.233 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:21.233 [-y verify result if this switch is on] 00:07:21.233 [-a tasks to allocate per core (default: same value as -q)] 00:07:21.233 Can be used to spread operations across a wider range of memory. 
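For context, the option listing above corresponds directly to the accel_perf invocations exercised in the remainder of this suite; representative examples follow (binary path as used throughout this workspace, flags taken from the help text; the -q and -o values are illustrative, not from this run):

# crc32c workload with seed value 32, verifying results (-y) for 1 second
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
# copy workload with an explicit 4 KiB transfer size and a queue depth of 64
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy -o 4096 -q 64 -y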
00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.233 00:07:21.233 real 0m0.036s 00:07:21.233 user 0m0.016s 00:07:21.233 sys 0m0.019s 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.233 20:54:12 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:21.233 ************************************ 00:07:21.233 END TEST accel_wrong_workload 00:07:21.233 ************************************ 00:07:21.233 Error: writing output failed: Broken pipe 00:07:21.233 20:54:12 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:21.233 20:54:12 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:21.233 20:54:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.233 20:54:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.493 ************************************ 00:07:21.493 START TEST accel_negative_buffers 00:07:21.493 ************************************ 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:21.493 20:54:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:21.493 20:54:12 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:21.493 20:54:12 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.493 20:54:12 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.493 20:54:12 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.493 20:54:12 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.493 20:54:12 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.493 20:54:12 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:21.493 20:54:12 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:21.493 -x option must be non-negative. 
00:07:21.493 [2024-07-13 20:54:12.171083] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:21.493 accel_perf options: 00:07:21.493 [-h help message] 00:07:21.493 [-q queue depth per core] 00:07:21.493 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:21.493 [-T number of threads per core 00:07:21.493 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:21.493 [-t time in seconds] 00:07:21.493 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:21.493 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:21.493 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:21.493 [-l for compress/decompress workloads, name of uncompressed input file 00:07:21.493 [-S for crc32c workload, use this seed value (default 0) 00:07:21.493 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:21.493 [-f for fill workload, use this BYTE value (default 255) 00:07:21.493 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:21.493 [-y verify result if this switch is on] 00:07:21.493 [-a tasks to allocate per core (default: same value as -q)] 00:07:21.493 Can be used to spread operations across a wider range of memory. 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.493 00:07:21.493 real 0m0.034s 00:07:21.493 user 0m0.019s 00:07:21.493 sys 0m0.015s 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.493 20:54:12 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:21.493 ************************************ 00:07:21.493 END TEST accel_negative_buffers 00:07:21.493 ************************************ 00:07:21.493 Error: writing output failed: Broken pipe 00:07:21.494 20:54:12 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:21.494 20:54:12 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:21.494 20:54:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.494 20:54:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.494 ************************************ 00:07:21.494 START TEST accel_crc32c 00:07:21.494 ************************************ 00:07:21.494 20:54:12 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:21.494 20:54:12 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:21.494 [2024-07-13 20:54:12.282207] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:21.494 [2024-07-13 20:54:12.282261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372597 ] 00:07:21.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.494 [2024-07-13 20:54:12.352242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.753 [2024-07-13 20:54:12.391620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.753 20:54:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.692 20:54:13 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:22.692 20:54:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.692 00:07:22.692 real 0m1.312s 00:07:22.692 user 0m1.205s 00:07:22.692 sys 0m0.123s 00:07:22.692 20:54:13 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.692 20:54:13 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:22.692 ************************************ 00:07:22.692 END TEST accel_crc32c 00:07:22.692 ************************************ 00:07:22.952 20:54:13 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:22.952 20:54:13 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:22.952 20:54:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.952 20:54:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.952 ************************************ 00:07:22.952 START TEST accel_crc32c_C2 00:07:22.952 ************************************ 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:22.952 20:54:13 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:22.952 [2024-07-13 20:54:13.672583] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:22.952 [2024-07-13 20:54:13.672637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372865 ] 00:07:22.952 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.952 [2024-07-13 20:54:13.744446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.952 [2024-07-13 20:54:13.782638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.952 20:54:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.328 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:07:24.328 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.328 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.328 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.328 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.328 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.328 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.328 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.328 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.328 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.329 00:07:24.329 real 0m1.310s 00:07:24.329 user 0m1.193s 00:07:24.329 sys 0m0.132s 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.329 20:54:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:24.329 ************************************ 00:07:24.329 END TEST accel_crc32c_C2 00:07:24.329 ************************************ 00:07:24.329 20:54:14 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:24.329 20:54:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:24.329 20:54:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.329 20:54:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.329 ************************************ 00:07:24.329 START TEST accel_copy 00:07:24.329 ************************************ 00:07:24.329 20:54:15 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:24.329 
20:54:15 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:24.329 [2024-07-13 20:54:15.062430] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:24.329 [2024-07-13 20:54:15.062495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3373079 ] 00:07:24.329 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.329 [2024-07-13 20:54:15.133389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.329 [2024-07-13 20:54:15.171439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.329 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.588 20:54:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:25.526 20:54:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.526 00:07:25.526 real 0m1.304s 00:07:25.526 user 0m1.181s 00:07:25.526 sys 0m0.128s 00:07:25.526 20:54:16 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.526 20:54:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:25.526 ************************************ 00:07:25.526 END TEST accel_copy 00:07:25.526 ************************************ 00:07:25.526 20:54:16 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.526 20:54:16 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:25.526 20:54:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.526 20:54:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.526 ************************************ 00:07:25.526 START TEST accel_fill 00:07:25.526 ************************************ 00:07:25.526 20:54:16 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.526 20:54:16 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.526 20:54:16 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:25.786 [2024-07-13 20:54:16.435914] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:25.786 [2024-07-13 20:54:16.435973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3373287 ] 00:07:25.786 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.786 [2024-07-13 20:54:16.507321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.786 [2024-07-13 20:54:16.545895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
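
The accel_fill run above drives the same accel_perf binary with an explicit fill pattern and queue shape. A minimal sketch of reproducing it by hand from this workspace follows; the per-flag readings are inferred from the val= trace (-f 128 surfaces as val=0x80, and -q 64 / -a 64 both surface as val=64), so treat them as assumptions rather than documented semantics:

    # sketch: re-run the software fill test outside the harness
    # -t 1:   run for one second, as in accel_test -t 1
    # -w fill: workload under test
    # -f 128: fill byte value (0x80 in the val= trace)
    # -q 64:  queue depth (assumed reading)
    # -a 64:  buffer alignment (assumed reading)
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y

The -y switch asks accel_perf to verify the result buffers, which is why every section in this run carries it.
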
00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.786 20:54:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:27.165 20:54:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.165 00:07:27.165 real 0m1.303s 00:07:27.165 user 0m1.181s 00:07:27.165 sys 0m0.128s 00:07:27.165 20:54:17 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.165 20:54:17 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:27.165 ************************************ 00:07:27.165 END TEST accel_fill 00:07:27.165 ************************************ 00:07:27.165 20:54:17 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:27.165 20:54:17 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:27.165 20:54:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.165 20:54:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.165 ************************************ 00:07:27.165 START TEST accel_copy_crc32c 00:07:27.165 ************************************ 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
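
Each section of this log is produced by the run_test helper visible in the trace; here accel/accel.sh@105 invokes run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y. Based only on the banners and the real/user/sys lines it emits, a plausible reconstruction of its shape is the sketch below; the actual helper lives in spdk/test/common/autotest_common.sh and may differ in detail:

    # hedged sketch of the run_test pattern seen throughout this log
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # emits the real/user/sys timings logged after each run
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

On this run each wrapped command completes in roughly 1.3 s of wall time, consistent with a 1-second accel_perf run plus setup and teardown.
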
00:07:27.165 [2024-07-13 20:54:17.806755] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:27.165 [2024-07-13 20:54:17.806824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3373492 ] 00:07:27.165 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.165 [2024-07-13 20:54:17.877917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.165 [2024-07-13 20:54:17.915344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:27.165 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.166 20:54:17 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.166 20:54:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.546 00:07:28.546 real 0m1.300s 00:07:28.546 user 0m1.176s 00:07:28.546 sys 0m0.130s 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.546 20:54:19 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:28.546 ************************************ 00:07:28.546 END TEST accel_copy_crc32c 00:07:28.546 ************************************ 00:07:28.546 20:54:19 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:28.546 20:54:19 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:28.546 20:54:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.546 20:54:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.546 ************************************ 00:07:28.546 START TEST accel_copy_crc32c_C2 00:07:28.546 ************************************ 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:28.546 [2024-07-13 20:54:19.184416] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:28.546 [2024-07-13 20:54:19.184492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3373771 ] 00:07:28.546 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.546 [2024-07-13 20:54:19.256151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.546 [2024-07-13 20:54:19.293255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 
-- # accel_opc=copy_crc32c 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.546 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.547 20:54:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.922 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.923 00:07:29.923 real 0m1.308s 00:07:29.923 user 0m1.184s 00:07:29.923 sys 0m0.130s 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.923 20:54:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:29.923 
************************************ 00:07:29.923 END TEST accel_copy_crc32c_C2 00:07:29.923 ************************************ 00:07:29.923 20:54:20 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:29.923 20:54:20 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:29.923 20:54:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.923 20:54:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.923 ************************************ 00:07:29.923 START TEST accel_dualcast 00:07:29.923 ************************************ 00:07:29.923 20:54:20 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:29.923 [2024-07-13 20:54:20.571645] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
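
Every accel_perf invocation in this job passes -c /dev/fd/62: a JSON accel configuration handed over an inherited file descriptor instead of a file on disk. In the build_accel_config trace above, all of the [[ 0 -gt 0 ]] guards evaluate false, so accel_json_cfg stays empty, no hardware engine is configured, and dualcast is serviced by the software module (accel_module=software later in the trace). A sketch of the same descriptor trick, with the exact mechanics of accel.sh assumed rather than copied:

    # sketch: feed accel_perf a JSON config through a pipe-backed fd;
    # '{}' stands in for the effectively empty config built on this run
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/examples/accel_perf -c <(echo '{}') -t 1 -w dualcast -y

bash expands <(...) to a /dev/fd/NN path, which is how /dev/fd/62 ends up on the logged command line.
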
00:07:29.923 [2024-07-13 20:54:20.571703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3374053 ] 00:07:29.923 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.923 [2024-07-13 20:54:20.644839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.923 [2024-07-13 20:54:20.684172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 
20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:29.923 20:54:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:21 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:31.300 20:54:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.300 00:07:31.300 real 0m1.311s 00:07:31.300 user 0m1.181s 00:07:31.300 sys 0m0.135s 00:07:31.300 20:54:21 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.300 20:54:21 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:31.300 ************************************ 00:07:31.300 END TEST accel_dualcast 00:07:31.300 ************************************ 00:07:31.300 20:54:21 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:31.300 20:54:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:31.300 20:54:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.300 20:54:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.300 ************************************ 00:07:31.300 START TEST accel_compare 00:07:31.300 ************************************ 00:07:31.300 20:54:21 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:31.300 20:54:21 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:31.300 [2024-07-13 20:54:21.942219] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
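
The long runs of IFS=:, read -r var val, and case "$var" entries that dominate this log are accel.sh (lines 19 to 23) parsing accel_perf's colon-separated output one field at a time, capturing the operation into accel_opc and the engine into accel_module; the bare val= entries are presumably output lines with nothing after the colon. A plausible shape for that loop, reconstructed from the trace alone (the exact case patterns in accel.sh are assumptions):

    # hedged reconstruction of the accel.sh output-parsing loop (@19-@23)
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=$val ;;      # e.g. copy, fill, compare
            *engine*) accel_module=$val ;;   # e.g. software
        esac
    done < <(./build/examples/accel_perf -c <(echo '{}') -t 1 -w compare -y)
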
00:07:31.300 [2024-07-13 20:54:21.942275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3374335 ] 00:07:31.300 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.300 [2024-07-13 20:54:22.012932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.300 [2024-07-13 20:54:22.050804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:31.300 20:54:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:31.300 20:54:22 accel.accel_compare -- 
00:07:31.300 20:54:22 accel.accel_compare -- # (repetitive xtrace elided: the IFS=: read loop consumed the remaining run settings 32, 32, 1, '1 seconds', Yes, plus empty sentinel values before and after the run)
00:07:32.676 20:54:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:32.676 20:54:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:32.676 20:54:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:32.676
00:07:32.676 real 0m1.291s
00:07:32.676 user 0m1.169s
00:07:32.676 sys 0m0.128s
00:07:32.676 20:54:23 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:32.676 20:54:23 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:07:32.676 ************************************
00:07:32.676 END TEST accel_compare
00:07:32.676 ************************************
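For reference, the compare pass that just finished can be re-run outside the run_test harness by invoking the accel_perf binary directly. A minimal sketch using only flags visible in this log; it assumes accel_perf accepts running without the empty JSON config the harness feeds in via -c /dev/fd/62:

    # 1-second software 'compare' workload with result verification (-y),
    # mirroring the 32 / 32 / 1 / '1 seconds' / Yes settings read back above
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y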
00:07:32.676 20:54:23 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:32.676 20:54:23 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:07:32.676 20:54:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:32.676 20:54:23 accel -- common/autotest_common.sh@10 -- # set +x
00:07:32.676 ************************************
00:07:32.676 START TEST accel_xor
00:07:32.676 ************************************
00:07:32.676 20:54:23 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:32.676 20:54:23 accel.accel_xor -- # (xtrace elided: accel_test locals and build_accel_config with an empty JSON config piped to jq -r .)
00:07:32.677 [2024-07-13 20:54:23.315729] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:32.677 [2024-07-13 20:54:23.315790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3374623 ]
00:07:32.677 EAL: No free 2048 kB hugepages reported on node 1
00:07:32.677 [2024-07-13 20:54:23.386185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.677 [2024-07-13 20:54:23.423443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.677 20:54:23 accel.accel_xor -- # (repetitive xtrace elided: the IFS=: read loop consumed the run settings 0x1, xor, 2, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes, plus empty sentinel values before and after the run)
00:07:34.053 20:54:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:34.053 20:54:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:34.053 20:54:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:34.053
00:07:34.053 real 0m1.300s
00:07:34.053 user 0m1.179s
00:07:34.053 sys 0m0.126s
00:07:34.053 20:54:24 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:34.053 20:54:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:34.053 ************************************
00:07:34.053 END TEST accel_xor
00:07:34.053 ************************************
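The xor pass above used the default two source buffers (the value 2 read back in the trace). A standalone equivalent of accel_test -t 1 -w xor -y, under the same assumption as the earlier sketch:

    # 1-second software xor over 4096-byte buffers, verifying the output (-y)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y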
00:07:34.053 20:54:24 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:34.053 20:54:24 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:07:34.053 20:54:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:34.053 20:54:24 accel -- common/autotest_common.sh@10 -- # set +x
00:07:34.053 ************************************
00:07:34.053 START TEST accel_xor
00:07:34.053 ************************************
00:07:34.053 20:54:24 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:34.053 20:54:24 accel.accel_xor -- # (xtrace elided: accel_test locals and build_accel_config with an empty JSON config piped to jq -r .)
00:07:34.053 [2024-07-13 20:54:24.673536] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:34.053 [2024-07-13 20:54:24.673611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3374903 ]
00:07:34.053 EAL: No free 2048 kB hugepages reported on node 1
00:07:34.053 [2024-07-13 20:54:24.743999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:34.053 [2024-07-13 20:54:24.781090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.054 20:54:24 accel.accel_xor -- # (repetitive xtrace elided: the IFS=: read loop consumed the run settings 0x1, xor, 3, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes, plus empty sentinel values before and after the run)
00:07:35.432 20:54:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:35.432 20:54:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:35.432 20:54:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:35.432
00:07:35.432 real 0m1.302s
00:07:35.432 user 0m1.180s
00:07:35.432 sys 0m0.127s
00:07:35.432 20:54:25 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:35.432 20:54:25 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:35.432 ************************************
00:07:35.432 END TEST accel_xor
00:07:35.432 ************************************
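The second xor pass widens the operation to three source buffers via -x 3 (the value 3 read back in the trace). Standalone equivalent, same assumption:

    # same xor workload, but exercising the 3-source path (-x 3)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3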
00:07:35.432 20:54:25 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:35.432 20:54:25 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:35.432 20:54:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:35.432 20:54:25 accel -- common/autotest_common.sh@10 -- # set +x
00:07:35.432 ************************************
00:07:35.432 START TEST accel_dif_verify
00:07:35.432 ************************************
00:07:35.432 20:54:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:35.432 20:54:26 accel.accel_dif_verify -- # (xtrace elided: accel_test locals and build_accel_config with an empty JSON config piped to jq -r .)
00:07:35.432 [2024-07-13 20:54:26.039084] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:35.432 [2024-07-13 20:54:26.039159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375166 ]
00:07:35.432 EAL: No free 2048 kB hugepages reported on node 1
00:07:35.432 [2024-07-13 20:54:26.110051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.432 [2024-07-13 20:54:26.147204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.433 20:54:26 accel.accel_dif_verify -- # (repetitive xtrace elided: the IFS=: read loop consumed the run settings 0x1, dif_verify, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No, plus empty sentinel values before and after the run)
00:07:36.841 20:54:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:36.841 20:54:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:36.841 20:54:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:36.841
00:07:36.841 real 0m1.304s
00:07:36.841 user 0m1.176s
00:07:36.841 sys 0m0.134s
00:07:36.841 20:54:27 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:36.841 20:54:27 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:07:36.841 ************************************
00:07:36.841 END TEST accel_dif_verify
00:07:36.841 ************************************
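dif_verify checks DIF-protected buffers, which is why the trace reads back '512 bytes' and '8 bytes' values alongside the usual '4096 bytes' transfer size, and why no -y flag is involved (the verify is the operation itself). Standalone equivalent, same assumption:

    # 1-second software DIF verify pass
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify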
00:07:36.841 20:54:27 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:36.841 20:54:27 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:36.841 20:54:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:36.841 20:54:27 accel -- common/autotest_common.sh@10 -- # set +x
00:07:36.841 ************************************
00:07:36.841 START TEST accel_dif_generate
00:07:36.841 ************************************
00:07:36.841 20:54:27 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:36.841 20:54:27 accel.accel_dif_generate -- # (xtrace elided: accel_test locals and build_accel_config with an empty JSON config piped to jq -r .)
00:07:36.842 [2024-07-13 20:54:27.410824] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:36.842 [2024-07-13 20:54:27.410889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375356 ]
00:07:36.842 EAL: No free 2048 kB hugepages reported on node 1
00:07:36.842 [2024-07-13 20:54:27.482855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.842 [2024-07-13 20:54:27.520622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.842 20:54:27 accel.accel_dif_generate -- # (repetitive xtrace elided: the IFS=: read loop consumed the run settings 0x1, dif_generate, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No, plus empty sentinel values before and after the run)
00:07:38.221 20:54:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:38.221 20:54:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:38.221 20:54:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:38.221
00:07:38.221 real 0m1.304s
00:07:38.221 user 0m1.180s
00:07:38.221 sys 0m0.131s
00:07:38.221 20:54:28 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:38.221 20:54:28 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:07:38.221 ************************************
00:07:38.221 END TEST accel_dif_generate
00:07:38.221 ************************************
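dif_generate is the producer-side counterpart: it generates the DIF metadata rather than checking it, with the same buffer geometry read back in the trace. Standalone equivalent, same assumption:

    # 1-second software DIF generate pass
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate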
00:07:38.221 20:54:28 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:38.221 20:54:28 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:38.221 20:54:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:38.221 20:54:28 accel -- common/autotest_common.sh@10 -- # set +x
00:07:38.221 ************************************
00:07:38.221 START TEST accel_dif_generate_copy
00:07:38.221 ************************************
00:07:38.221 20:54:28 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:38.221 20:54:28 accel.accel_dif_generate_copy -- # (xtrace elided: accel_test locals and build_accel_config with an empty JSON config piped to jq -r .)
00:07:38.221 [2024-07-13 20:54:28.780567] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:38.222 [2024-07-13 20:54:28.780615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375554 ]
00:07:38.222 EAL: No free 2048 kB hugepages reported on node 1
00:07:38.222 [2024-07-13 20:54:28.849245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.222 [2024-07-13 20:54:28.886596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.222 20:54:28 accel.accel_dif_generate_copy -- # (repetitive xtrace elided: the IFS=: read loop consumed the run settings 0x1, dif_generate_copy, '4096 bytes', '4096 bytes', software, 32, 32, 1, '1 seconds', No, plus empty sentinel values before and after the run)
00:07:39.603 20:54:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:39.604 20:54:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:39.604 20:54:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:39.604
00:07:39.604 real 0m1.297s
00:07:39.604 user 0m1.176s
00:07:39.604 sys 0m0.127s
00:07:39.604 20:54:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:39.604 20:54:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:07:39.604 ************************************
00:07:39.604 END TEST accel_dif_generate_copy
00:07:39.604 ************************************
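dif_generate_copy combines DIF generation with a copy into a second 4096-byte buffer (note the two '4096 bytes' values and the absence of the 512/8-byte reads this time). Standalone equivalent, same assumption:

    # 1-second software DIF generate + copy pass
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy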
accel_module 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:39.604 [2024-07-13 20:54:30.156593] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:39.604 [2024-07-13 20:54:30.156649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375798 ] 00:07:39.604 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.604 [2024-07-13 20:54:30.229696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.604 [2024-07-13 20:54:30.268177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.604 20:54:30 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:39.604 20:54:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 20:54:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:40.985 20:54:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.985 00:07:40.985 real 0m1.307s 00:07:40.985 user 0m1.186s 00:07:40.985 sys 0m0.127s 00:07:40.985 20:54:31 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.985 20:54:31 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:40.985 ************************************ 00:07:40.986 END TEST accel_comp 00:07:40.986 ************************************ 00:07:40.986 20:54:31 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:40.986 20:54:31 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:40.986 20:54:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.986 20:54:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.986 ************************************ 00:07:40.986 START TEST accel_decomp 00:07:40.986 ************************************ 00:07:40.986 20:54:31 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:40.986 [2024-07-13 20:54:31.521736] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:40.986 [2024-07-13 20:54:31.521779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376078 ] 00:07:40.986 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.986 [2024-07-13 20:54:31.590124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.986 [2024-07-13 20:54:31.627253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.986 20:54:31 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:40.986 20:54:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.923 20:54:32 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.923 00:07:41.923 real 0m1.289s 00:07:41.923 user 0m1.180s 00:07:41.923 sys 0m0.114s 00:07:41.923 20:54:32 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.923 20:54:32 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:41.923 ************************************ 00:07:41.923 END TEST accel_decomp 00:07:41.923 ************************************ 00:07:42.181 
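The dif_generate_copy, compress, and decompress passes above all exercise the same accel.sh harness loop: accel_perf is launched for a one-second workload, and the repeated accel/accel.sh@19 `IFS=:` / `read -r var val` and @21 `case "$var" in` steps in the trace walk a stream of name:value settings, capturing accel_opc (@23) and accel_module (@22) for the @27 assertions at the end of each run. The bash sketch below is a minimal, hypothetical reduction of that pattern, not the actual accel.sh source; the stream format, the parse_accel_settings name, and the opc/module keys are assumptions made for illustration.

```bash
#!/usr/bin/env bash
# Hypothetical reduction of the parse-and-assert loop traced above
# (illustrative only; the real accel.sh differs in detail).
parse_accel_settings() {
    local accel_opc="" accel_module="" var val
    # Split each "name:value" line on ':', as the IFS=: / read -r var val
    # trace steps do, and keep the two fields the harness asserts on.
    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;        # e.g. compress / decompress
            module) accel_module=$val ;;  # e.g. software
        esac
    done
    # Mirrors the accel.sh@27 checks: both fields set, software engine used.
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]
}

# Usage with a fake settings stream (the key names are assumptions):
printf 'opc:decompress\nmodule:software\n' | parse_accel_settings && echo PASS
```

Measured against that template, the three runs above are consistent: each reports the software module for its opcode and completes in roughly 1.3 s of wall-clock time (real 0m1.297s, 0m1.307s, 0m1.289s).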
20:54:32 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.181 20:54:32 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:42.181 20:54:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.181 20:54:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.181 ************************************ 00:07:42.181 START TEST accel_decmop_full 00:07:42.181 ************************************ 00:07:42.181 20:54:32 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.181 20:54:32 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:42.181 20:54:32 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:42.181 20:54:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.181 20:54:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.181 20:54:32 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.181 20:54:32 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.181 20:54:32 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:42.181 20:54:32 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.181 20:54:32 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.181 20:54:32 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.182 20:54:32 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.182 20:54:32 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.182 20:54:32 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:42.182 20:54:32 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:42.182 [2024-07-13 20:54:32.895276] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:42.182 [2024-07-13 20:54:32.895333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376359 ] 00:07:42.182 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.182 [2024-07-13 20:54:32.964541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.182 [2024-07-13 20:54:33.002399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:42.182 20:54:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # 
read -r var val 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.558 20:54:34 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.558 00:07:43.558 real 0m1.313s 00:07:43.558 user 0m1.189s 00:07:43.558 sys 0m0.128s 00:07:43.558 20:54:34 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.558 20:54:34 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:43.558 ************************************ 00:07:43.558 END TEST accel_decmop_full 00:07:43.558 ************************************ 00:07:43.558 20:54:34 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.558 20:54:34 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:43.558 20:54:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.558 20:54:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.558 ************************************ 00:07:43.558 START TEST accel_decomp_mcore 00:07:43.558 ************************************ 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@15 
-- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.558 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:43.559 [2024-07-13 20:54:34.269793] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:43.559 [2024-07-13 20:54:34.269849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376644 ] 00:07:43.559 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.559 [2024-07-13 20:54:34.338636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.559 [2024-07-13 20:54:34.379472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.559 [2024-07-13 20:54:34.379568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.559 [2024-07-13 20:54:34.379651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.559 [2024-07-13 20:54:34.379652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 
00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:43.559 20:54:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.935 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 20:54:35 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.936 00:07:44.936 real 0m1.314s 00:07:44.936 user 0m4.523s 00:07:44.936 sys 0m0.135s 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.936 20:54:35 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:44.936 ************************************ 00:07:44.936 END TEST accel_decomp_mcore 00:07:44.936 ************************************ 00:07:44.936 20:54:35 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.936 20:54:35 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:44.936 20:54:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.936 20:54:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.936 ************************************ 00:07:44.936 START TEST accel_decomp_full_mcore 00:07:44.936 ************************************ 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.936 20:54:35 
accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:44.936 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:44.936 [2024-07-13 20:54:35.671727] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:44.936 [2024-07-13 20:54:35.671797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376932 ] 00:07:44.936 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.936 [2024-07-13 20:54:35.742759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.936 [2024-07-13 20:54:35.783050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.936 [2024-07-13 20:54:35.783147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.936 [2024-07-13 20:54:35.783209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.936 [2024-07-13 20:54:35.783211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.194 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.194 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.194 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.194 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.194 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.194 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.194 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.194 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.195 20:54:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.131 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.132 00:07:46.132 real 0m1.333s 00:07:46.132 user 0m4.552s 00:07:46.132 sys 0m0.150s 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.132 20:54:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:46.132 ************************************ 00:07:46.132 END TEST accel_decomp_full_mcore 00:07:46.132 ************************************ 00:07:46.132 20:54:37 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:46.132 20:54:37 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:46.132 20:54:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.132 20:54:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.391 ************************************ 00:07:46.391 START TEST accel_decomp_mthread 00:07:46.391 ************************************ 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
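Note: accel_decomp_full_mcore above ran the software decompress path over the whole 111250-byte bib payload and scaled across cores (0m4.552s of user time packed into 0m1.333s of wall time), while the config dump that follows belongs to accel_decomp_mthread, the single-core variant with two worker threads. A minimal sketch of the invocation accel.sh is driving here, with the paths copied from the trace (the accel JSON config is fed in on /dev/fd/62 by the harness):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# -t 1: run for 1 second; -w decompress: workload; -l: compressed input file
# -y: verify the output; -T 2: the two-thread case this test exercises
$SPDK/build/examples/accel_perf -c /dev/fd/62 \
    -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2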
00:07:46.391 [2024-07-13 20:54:37.082377] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:46.391 [2024-07-13 20:54:37.082435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377219 ] 00:07:46.391 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.391 [2024-07-13 20:54:37.150810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.391 [2024-07-13 20:54:37.188164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:46.391 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.392 20:54:37 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.791 00:07:47.791 real 0m1.310s 00:07:47.791 user 0m1.195s 00:07:47.791 sys 0m0.130s 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.791 20:54:38 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:47.791 ************************************ 00:07:47.791 END TEST accel_decomp_mthread 00:07:47.791 ************************************ 00:07:47.791 20:54:38 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.791 20:54:38 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:47.791 20:54:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.791 20:54:38 accel 
-- common/autotest_common.sh@10 -- # set +x 00:07:47.791 ************************************ 00:07:47.791 START TEST accel_decomp_full_mthread 00:07:47.791 ************************************ 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:47.791 [2024-07-13 20:54:38.474460] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
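Each accel_test child boots its own SPDK application, so the EAL banner repeats for every sub-test; the "No free 2048 kB hugepages reported on node 1" notice typically just means the hugepage pool was reserved on node 0 only, and is not a failure here. The recurring EAL flags decode roughly as follows (standard DPDK options; the glosses are added for readability, not taken from the trace):

# -c 0x1                          core mask: run the reactor on core 0 only
# --no-shconf                     no shared runtime config files
# --huge-unlink                   unlink hugepage backing files after mapping
# --base-virtaddr=0x200000000000  map memory at a fixed base virtual address
# --match-allocations             return hugepages to the pool exactly as allocated
# --file-prefix=spdk_pidNNNN      per-process prefix so concurrent runs cannot collide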
00:07:47.791 [2024-07-13 20:54:38.474545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377506 ] 00:07:47.791 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.791 [2024-07-13 20:54:38.543238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.791 [2024-07-13 20:54:38.580811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.791 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
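The dump above differs from the accel_decomp_mthread one in a single value: '111250 bytes' instead of '4096 bytes'. That is evidently the effect of the -o 0 flag on the run_test line, which makes the harness submit the whole bib file per operation rather than 4096-byte chunks. As shorthand, the two invocations recorded in this log (with SPDK as set in the earlier sketch):

# accel_decomp_mthread: 4096-byte operations, 2 threads
accel_test -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2
# accel_decomp_full_mthread: whole-file (111250-byte) operations, 2 threads
accel_test -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2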
00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.792 20:54:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.168 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.169 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.169 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.169 20:54:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.169 00:07:49.169 real 0m1.334s 00:07:49.169 user 0m1.220s 00:07:49.169 sys 0m0.128s 00:07:49.169 20:54:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.169 20:54:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:49.169 ************************************ 00:07:49.169 END TEST accel_decomp_full_mthread 00:07:49.169 
************************************ 00:07:49.169 20:54:39 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:49.169 20:54:39 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:49.169 20:54:39 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:49.169 20:54:39 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:49.169 20:54:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.169 20:54:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.169 20:54:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.169 20:54:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.169 20:54:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.169 20:54:39 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.169 20:54:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.169 20:54:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:49.169 20:54:39 accel -- accel/accel.sh@41 -- # jq -r . 00:07:49.169 ************************************ 00:07:49.169 START TEST accel_dif_functional_tests 00:07:49.169 ************************************ 00:07:49.169 20:54:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:49.169 [2024-07-13 20:54:39.906862] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:49.169 [2024-07-13 20:54:39.906904] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377757 ] 00:07:49.169 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.169 [2024-07-13 20:54:39.973952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.169 [2024-07-13 20:54:40.015289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.169 [2024-07-13 20:54:40.015385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.169 [2024-07-13 20:54:40.015388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.428 00:07:49.428 00:07:49.428 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.428 http://cunit.sourceforge.net/ 00:07:49.428 00:07:49.428 00:07:49.428 Suite: accel_dif 00:07:49.428 Test: verify: DIF generated, GUARD check ...passed 00:07:49.428 Test: verify: DIF generated, APPTAG check ...passed 00:07:49.428 Test: verify: DIF generated, REFTAG check ...passed 00:07:49.428 Test: verify: DIF not generated, GUARD check ...[2024-07-13 20:54:40.080801] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:49.428 passed 00:07:49.428 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 20:54:40.080854] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:49.428 passed 00:07:49.428 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 20:54:40.080877] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:49.428 passed 00:07:49.428 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:49.428 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 20:54:40.080924] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:49.428 passed 00:07:49.428 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:07:49.428 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:49.428 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:49.428 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 20:54:40.081032] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:49.428 passed 00:07:49.428 Test: verify copy: DIF generated, GUARD check ...passed 00:07:49.428 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:49.428 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:49.428 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 20:54:40.081140] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:49.428 passed 00:07:49.428 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 20:54:40.081164] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:49.428 passed 00:07:49.428 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 20:54:40.081187] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:49.428 passed 00:07:49.428 Test: generate copy: DIF generated, GUARD check ...passed 00:07:49.428 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:49.428 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:49.428 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:49.428 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:49.428 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:49.428 Test: generate copy: iovecs-len validate ...[2024-07-13 20:54:40.081356] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:49.428 passed 00:07:49.428 Test: generate copy: buffer alignment validate ...passed 00:07:49.428 00:07:49.428 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.428 suites 1 1 n/a 0 0 00:07:49.428 tests 26 26 26 0 0 00:07:49.428 asserts 115 115 115 0 n/a 00:07:49.428 00:07:49.428 Elapsed time = 0.002 seconds 00:07:49.428 00:07:49.428 real 0m0.378s 00:07:49.428 user 0m0.552s 00:07:49.428 sys 0m0.172s 00:07:49.428 20:54:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.428 20:54:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:49.428 ************************************ 00:07:49.428 END TEST accel_dif_functional_tests 00:07:49.428 ************************************ 00:07:49.428 00:07:49.428 real 0m30.008s 00:07:49.428 user 0m32.986s 00:07:49.428 sys 0m4.861s 00:07:49.428 20:54:40 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.428 20:54:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.428 ************************************ 00:07:49.428 END TEST accel 00:07:49.428 ************************************ 00:07:49.688 20:54:40 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:49.688 20:54:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:49.688 20:54:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.688 20:54:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.688 ************************************ 00:07:49.688 START TEST accel_rpc 00:07:49.688 ************************************ 00:07:49.688 20:54:40 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:49.688 * Looking for test storage... 00:07:49.688 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:49.688 20:54:40 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:49.688 20:54:40 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3377858 00:07:49.688 20:54:40 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3377858 00:07:49.688 20:54:40 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3377858 ']' 00:07:49.688 20:54:40 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.688 20:54:40 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:49.688 20:54:40 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.688 20:54:40 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:49.688 20:54:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.688 20:54:40 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:49.688 [2024-07-13 20:54:40.471757] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
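The DIF functional suite just finished clean: 26 tests, 115 asserts, zero failures in 0.002 seconds. Each "not generated" case corrupts a Guard, App Tag, or Ref Tag on purpose and passes only because the compare in dif.c fails as expected, and the iovecs-len case likewise expects spdk_dif_generate_copy() to reject the misaligned bounce buffers. The accel_rpc test now starting works differently: it boots spdk_tgt with --wait-for-rpc and drives the accel opcode table over JSON-RPC. A condensed, hedged sketch of the flow the trace walks through below:

# start the target with subsystem initialization deferred
$SPDK/build/bin/spdk_tgt --wait-for-rpc &
# before init, even a bogus module name is accepted for the copy opcode...
$SPDK/scripts/rpc.py accel_assign_opc -o copy -m incorrect
# ...and a later assignment overrides it
$SPDK/scripts/rpc.py accel_assign_opc -o copy -m software
$SPDK/scripts/rpc.py framework_start_init
# after init, the copy opcode must resolve to the software module
$SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected: software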
00:07:49.688 [2024-07-13 20:54:40.471817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377858 ] 00:07:49.688 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.688 [2024-07-13 20:54:40.540113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.946 [2024-07-13 20:54:40.580571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.515 20:54:41 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:50.515 20:54:41 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:50.515 20:54:41 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:50.515 20:54:41 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:50.515 20:54:41 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:50.515 20:54:41 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:50.515 20:54:41 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:50.515 20:54:41 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:50.515 20:54:41 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.515 20:54:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.515 ************************************ 00:07:50.515 START TEST accel_assign_opcode 00:07:50.515 ************************************ 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:50.515 [2024-07-13 20:54:41.282675] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:50.515 [2024-07-13 20:54:41.290685] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.515 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:50.775 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.775 20:54:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:50.775 20:54:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:50.775 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.775 20:54:41 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:50.775 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:50.775 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.775 software 00:07:50.775 00:07:50.775 real 0m0.225s 00:07:50.775 user 0m0.042s 00:07:50.775 sys 0m0.011s 00:07:50.775 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.775 20:54:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:50.775 ************************************ 00:07:50.775 END TEST accel_assign_opcode 00:07:50.775 ************************************ 00:07:50.775 20:54:41 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3377858 00:07:50.775 20:54:41 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3377858 ']' 00:07:50.775 20:54:41 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3377858 00:07:50.775 20:54:41 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:50.775 20:54:41 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:50.775 20:54:41 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3377858 00:07:50.775 20:54:41 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:50.775 20:54:41 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:50.775 20:54:41 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3377858' 00:07:50.775 killing process with pid 3377858 00:07:50.775 20:54:41 accel_rpc -- common/autotest_common.sh@965 -- # kill 3377858 00:07:50.775 20:54:41 accel_rpc -- common/autotest_common.sh@970 -- # wait 3377858 00:07:51.035 00:07:51.035 real 0m1.541s 00:07:51.035 user 0m1.584s 00:07:51.035 sys 0m0.439s 00:07:51.035 20:54:41 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.035 20:54:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.035 ************************************ 00:07:51.035 END TEST accel_rpc 00:07:51.035 ************************************ 00:07:51.295 20:54:41 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:51.295 20:54:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:51.295 20:54:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.295 20:54:41 -- common/autotest_common.sh@10 -- # set +x 00:07:51.295 ************************************ 00:07:51.295 START TEST app_cmdline 00:07:51.295 ************************************ 00:07:51.295 20:54:41 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:51.295 * Looking for test storage... 
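app_cmdline, starting here, exercises the --rpcs-allowed whitelist: the target is launched so that only spdk_get_version and rpc_get_methods are callable, and the test confirms both that those two answer and that anything else is refused. Roughly, per the trace that follows (hedged shorthand):

$SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
$SPDK/scripts/rpc.py spdk_get_version           # ok: returns the version JSON
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats     # refused: -32601 "Method not found"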
00:07:51.295 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:51.295 20:54:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:51.295 20:54:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3378195 00:07:51.295 20:54:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3378195 00:07:51.295 20:54:42 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3378195 ']' 00:07:51.295 20:54:42 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.295 20:54:42 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:51.295 20:54:42 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.295 20:54:42 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.295 20:54:42 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.295 20:54:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:51.295 [2024-07-13 20:54:42.129796] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:51.295 [2024-07-13 20:54:42.129851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378195 ] 00:07:51.295 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.555 [2024-07-13 20:54:42.198898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.555 [2024-07-13 20:54:42.238929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.133 20:54:42 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.133 20:54:42 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:52.133 20:54:42 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:52.395 { 00:07:52.395 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:52.395 "fields": { 00:07:52.395 "major": 24, 00:07:52.395 "minor": 5, 00:07:52.395 "patch": 1, 00:07:52.395 "suffix": "-pre", 00:07:52.395 "commit": "5fa2f5086" 00:07:52.395 } 00:07:52.395 } 00:07:52.395 20:54:43 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:52.395 20:54:43 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:52.395 20:54:43 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:52.395 20:54:43 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:52.395 20:54:43 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:52.395 20:54:43 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.395 20:54:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:52.395 20:54:43 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:52.395 20:54:43 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:52.395 20:54:43 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.395 20:54:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:52.395 20:54:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:52.395 20:54:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.395 20:54:43 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:52.395 20:54:43 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:52.396 request: 00:07:52.396 { 00:07:52.396 "method": "env_dpdk_get_mem_stats", 00:07:52.396 "req_id": 1 00:07:52.396 } 00:07:52.396 Got JSON-RPC error response 00:07:52.396 response: 00:07:52.396 { 00:07:52.396 "code": -32601, 00:07:52.396 "message": "Method not found" 00:07:52.396 } 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:52.396 20:54:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3378195 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3378195 ']' 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3378195 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:52.396 20:54:43 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:52.654 20:54:43 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3378195 00:07:52.654 20:54:43 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:52.654 20:54:43 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:52.654 20:54:43 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3378195' 00:07:52.654 killing process with pid 3378195 00:07:52.654 20:54:43 app_cmdline -- common/autotest_common.sh@965 -- # kill 3378195 00:07:52.654 20:54:43 app_cmdline -- common/autotest_common.sh@970 -- # wait 3378195 00:07:52.913 00:07:52.913 real 0m1.654s 00:07:52.913 user 0m1.920s 00:07:52.913 sys 0m0.469s 00:07:52.913 20:54:43 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.913 20:54:43 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:52.913 ************************************ 00:07:52.913 END TEST app_cmdline 00:07:52.913 ************************************ 00:07:52.913 20:54:43 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:52.913 20:54:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:52.913 20:54:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.913 20:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:52.913 ************************************ 00:07:52.913 START TEST version 00:07:52.913 ************************************ 00:07:52.913 20:54:43 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:52.913 * Looking for test storage... 00:07:53.172 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:53.172 20:54:43 version -- app/version.sh@17 -- # get_header_version major 00:07:53.172 20:54:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:53.172 20:54:43 version -- app/version.sh@14 -- # cut -f2 00:07:53.172 20:54:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.172 20:54:43 version -- app/version.sh@17 -- # major=24 00:07:53.172 20:54:43 version -- app/version.sh@18 -- # get_header_version minor 00:07:53.172 20:54:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:53.172 20:54:43 version -- app/version.sh@14 -- # cut -f2 00:07:53.172 20:54:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.172 20:54:43 version -- app/version.sh@18 -- # minor=5 00:07:53.172 20:54:43 version -- app/version.sh@19 -- # get_header_version patch 00:07:53.172 20:54:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:53.172 20:54:43 version -- app/version.sh@14 -- # cut -f2 00:07:53.172 20:54:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.172 20:54:43 version -- app/version.sh@19 -- # patch=1 00:07:53.172 20:54:43 version -- app/version.sh@20 -- # get_header_version suffix 00:07:53.172 20:54:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:53.172 20:54:43 version -- app/version.sh@14 -- # cut -f2 00:07:53.172 20:54:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:53.172 20:54:43 version -- app/version.sh@20 -- # suffix=-pre 00:07:53.172 20:54:43 version -- app/version.sh@22 -- # version=24.5 00:07:53.172 20:54:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:53.172 20:54:43 version -- app/version.sh@25 -- # version=24.5.1 00:07:53.172 20:54:43 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:53.172 20:54:43 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:53.172 20:54:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:53.172 20:54:43 version -- app/version.sh@30 -- # 
py_version=24.5.1rc0 00:07:53.172 20:54:43 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:53.172 00:07:53.172 real 0m0.181s 00:07:53.172 user 0m0.103s 00:07:53.172 sys 0m0.123s 00:07:53.172 20:54:43 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.172 20:54:43 version -- common/autotest_common.sh@10 -- # set +x 00:07:53.172 ************************************ 00:07:53.172 END TEST version 00:07:53.172 ************************************ 00:07:53.172 20:54:43 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:53.172 20:54:43 -- spdk/autotest.sh@198 -- # uname -s 00:07:53.172 20:54:43 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:53.172 20:54:43 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:53.172 20:54:43 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:53.172 20:54:43 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:53.172 20:54:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:53.172 20:54:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:53.172 20:54:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.172 20:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:53.172 20:54:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:53.172 20:54:43 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:53.172 20:54:43 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:53.172 20:54:43 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:53.172 20:54:43 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:07:53.172 20:54:43 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:53.172 20:54:43 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:53.172 20:54:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.172 20:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:53.172 ************************************ 00:07:53.172 START TEST nvmf_rdma 00:07:53.172 ************************************ 00:07:53.172 20:54:44 nvmf_rdma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:53.432 * Looking for test storage... 00:07:53.432 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:53.432 20:54:44 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.432 20:54:44 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.432 20:54:44 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.432 20:54:44 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.432 20:54:44 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.432 20:54:44 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.432 20:54:44 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:07:53.432 20:54:44 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:53.432 20:54:44 nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:53.432 20:54:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:53.432 20:54:44 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:53.432 20:54:44 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:53.432 20:54:44 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.433 20:54:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:53.433 ************************************ 00:07:53.433 START TEST nvmf_example 00:07:53.433 ************************************ 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:53.433 * Looking for test storage... 
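[editor's sketch] The trace above shows test/nvmf/common.sh populating the shared environment: listener ports 4420-4422, the 192.168.100.0/24 address plan starting at host .8, and a host NQN/ID pair generated by `nvme gen-hostnqn`. A minimal sketch of how those pieces combine into a host-side connect call; the variable names are taken from the trace, while connect_first_target is a hypothetical helper, not part of SPDK.

# Sketch only: recombining the variables common.sh sets above.
NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8
NVMF_PORT=4420
NVME_HOSTNQN=$(nvme gen-hostnqn)              # same call as common.sh@17
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # the trace shows HOSTID is the NQN's uuid part
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

connect_first_target() {
    local ip="$NVMF_IP_PREFIX.$NVMF_IP_LEAST_ADDR"   # 192.168.100.8
    nvme connect -t rdma -a "$ip" -s "$NVMF_PORT" \
        -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
}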
00:07:53.433 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:53.433 20:54:44 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:53.433 20:54:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:53.692 20:54:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:00.305 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.305 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:00.305 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
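[editor's sketch] gather_supported_nvmf_pci_devs (common.sh@289-328 above) builds per-vendor arrays of supported RDMA NICs by looking up "$vendor:$device" keys in a pre-built pci_bus_cache map (Intel 0x8086 for e810/x722, Mellanox 0x15b3 for the mlx5 IDs listed), then maps each hit to its netdevs via the /sys/bus/pci/devices/$pci/net/* glob. A self-contained sketch of the same scan, reading sysfs directly instead of the harness's cache:

# Sketch: find the Mellanox mlx5 NICs the harness matches above, via sysfs.
mellanox=0x15b3
mlx_ids=(0x1013 0x1015 0x1017 0x1019 0x101d 0x1021 0xa2d6 0xa2dc)   # IDs from the trace
for dev in /sys/bus/pci/devices/*; do
    [[ $(cat "$dev/vendor") == "$mellanox" ]] || continue
    did=$(cat "$dev/device")
    for id in "${mlx_ids[@]}"; do
        if [[ $did == "$id" ]]; then
            pci=${dev##*/}
            # same glob common.sh@383 uses to map a PCI address to netdevs
            for net in "$dev"/net/*; do
                [[ -e $net ]] && echo "Found net device under $pci: ${net##*/}"
            done
        fi
    done
done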
00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:00.306 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:00.306 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.306 20:54:50 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:00.306 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:00.306 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:00.306 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:00.307 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:00.307 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:00.307 altname enp217s0f0np0 00:08:00.307 altname ens818f0np0 00:08:00.307 inet 192.168.100.8/24 scope global mlx_0_0 00:08:00.307 valid_lft forever preferred_lft forever 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:00.307 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:00.307 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:00.307 altname enp217s0f1np1 00:08:00.307 altname ens818f1np1 00:08:00.307 inet 192.168.100.9/24 scope global mlx_0_1 00:08:00.307 valid_lft forever preferred_lft forever 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:00.307 20:54:50 
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:00.307 192.168.100.9' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:00.307 192.168.100.9' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:00.307 
20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:00.307 192.168.100.9' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3381967 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3381967 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3381967 ']' 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
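[editor's sketch] nvmfexamplestart launches build/examples/nvmf with -i 0 -g 10000 -m 0xF, records nvmfpid, installs the cleanup trap, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A rough sketch of that wait loop, under the simplifying assumption that "process alive plus socket present" means ready; the real helper in autotest_common.sh also retries RPCs, so treat this as the idea rather than the implementation.

# Simplified waitforlisten-style poll loop.
wait_for_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died while starting
        [[ -S $sock ]] && return 0               # listener is up
        sleep 0.1
    done
    return 1
}

./build/examples/nvmf -i 0 -g 10000 -m 0xF &
nvmfpid=$!
wait_for_sock "$nvmfpid" || exit 1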
00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:00.307 20:54:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:00.307 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.876 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:00.876 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:08:00.876 20:54:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:00.876 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.876 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:00.876 20:54:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:00.876 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.876 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.135 20:54:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:01.136 20:54:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:01.136 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.136 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:01.136 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.136 20:54:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:01.136 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.136 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:01.136 20:54:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.136 20:54:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:01.136 20:54:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
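[editor's note] Once the example target is listening, the test provisions it over the RPC socket: create the RDMA transport, back a namespace with a 64 MiB / 512 B malloc bdev, publish it under nqn.2016-06.io.spdk:cnode1, attach a listener on 192.168.100.8:4420, then point spdk_nvme_perf at it. The same sequence replayed with scripts/rpc.py (rpc_cmd in the trace is a thin wrapper around it); every method name and argument below is taken from the trace:

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512          # -> prints "Malloc0"
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420

# Then the load generator, exactly as invoked above:
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'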
00:08:01.136 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.351 Initializing NVMe Controllers 00:08:13.351 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:13.351 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:13.351 Initialization complete. Launching workers. 00:08:13.351 ======================================================== 00:08:13.351 Latency(us) 00:08:13.351 Device Information : IOPS MiB/s Average min max 00:08:13.351 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 24801.94 96.88 2580.07 619.07 13020.86 00:08:13.351 ======================================================== 00:08:13.351 Total : 24801.94 96.88 2580.07 619.07 13020.86 00:08:13.351 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:13.351 rmmod nvme_rdma 00:08:13.351 rmmod nvme_fabrics 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3381967 ']' 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3381967 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3381967 ']' 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3381967 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3381967 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3381967' 00:08:13.351 killing process with pid 3381967 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@965 -- # kill 3381967 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@970 -- # wait 3381967 00:08:13.351 nvmf threads initialize successfully 00:08:13.351 bdev subsystem init successfully 00:08:13.351 created a nvmf target service 00:08:13.351 create targets's poll groups done 00:08:13.351 all subsystems of target started 00:08:13.351 nvmf target is running 00:08:13.351 all subsystems of target stopped 00:08:13.351 destroy targets's poll groups done 
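[editor's sketch] After the perf run, the trap fires nvmftestfini: unload nvme-rdma/nvme-fabrics, then killprocess the target. The trace at autotest_common.sh@946-970 above shows killprocess's shape: confirm the PID is live with kill -0, inspect the process name with ps --no-headers -o comm= (refusing to kill a bare sudo wrapper), then kill and wait. A condensed reconstruction of that flow:

# Condensed reconstruction of the killprocess flow visible in the trace.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                   # still running and ours?
    if [[ $(uname) == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1          # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}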
00:08:13.351 destroyed the nvmf target service 00:08:13.351 bdev subsystem finish successfully 00:08:13.351 nvmf threads destroy successfully 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:13.351 00:08:13.351 real 0m19.284s 00:08:13.351 user 0m52.080s 00:08:13.351 sys 0m5.348s 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.351 20:55:03 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:13.351 ************************************ 00:08:13.351 END TEST nvmf_example 00:08:13.351 ************************************ 00:08:13.352 20:55:03 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:13.352 20:55:03 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:13.352 20:55:03 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.352 20:55:03 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:13.352 ************************************ 00:08:13.352 START TEST nvmf_filesystem 00:08:13.352 ************************************ 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:13.352 * Looking for test storage... 
00:08:13.352 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:13.352 20:55:03 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:13.352 20:55:03 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:13.352 
20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:13.352 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:13.353 #define SPDK_CONFIG_H 00:08:13.353 #define SPDK_CONFIG_APPS 1 00:08:13.353 #define SPDK_CONFIG_ARCH native 00:08:13.353 #undef SPDK_CONFIG_ASAN 00:08:13.353 #undef SPDK_CONFIG_AVAHI 00:08:13.353 #undef SPDK_CONFIG_CET 00:08:13.353 #define SPDK_CONFIG_COVERAGE 1 00:08:13.353 #define SPDK_CONFIG_CROSS_PREFIX 00:08:13.353 #undef SPDK_CONFIG_CRYPTO 00:08:13.353 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:13.353 #undef SPDK_CONFIG_CUSTOMOCF 00:08:13.353 #undef SPDK_CONFIG_DAOS 00:08:13.353 #define SPDK_CONFIG_DAOS_DIR 00:08:13.353 #define SPDK_CONFIG_DEBUG 1 00:08:13.353 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:13.353 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:13.353 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:13.353 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:13.353 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:13.353 #undef SPDK_CONFIG_DPDK_UADK 00:08:13.353 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:13.353 #define SPDK_CONFIG_EXAMPLES 1 00:08:13.353 #undef SPDK_CONFIG_FC 00:08:13.353 #define SPDK_CONFIG_FC_PATH 00:08:13.353 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:13.353 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:13.353 #undef SPDK_CONFIG_FUSE 00:08:13.353 #undef SPDK_CONFIG_FUZZER 00:08:13.353 #define SPDK_CONFIG_FUZZER_LIB 00:08:13.353 #undef SPDK_CONFIG_GOLANG 00:08:13.353 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:13.353 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:13.353 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:13.353 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:13.353 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:13.353 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:13.353 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:13.353 #define SPDK_CONFIG_IDXD 1 00:08:13.353 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:13.353 #undef SPDK_CONFIG_IPSEC_MB 00:08:13.353 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:13.353 #define SPDK_CONFIG_ISAL 1 00:08:13.353 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:13.353 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:13.353 #define SPDK_CONFIG_LIBDIR 00:08:13.353 #undef SPDK_CONFIG_LTO 00:08:13.353 #define SPDK_CONFIG_MAX_LCORES 
00:08:13.353 #define SPDK_CONFIG_NVME_CUSE 1 00:08:13.353 #undef SPDK_CONFIG_OCF 00:08:13.353 #define SPDK_CONFIG_OCF_PATH 00:08:13.353 #define SPDK_CONFIG_OPENSSL_PATH 00:08:13.353 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:13.353 #define SPDK_CONFIG_PGO_DIR 00:08:13.353 #undef SPDK_CONFIG_PGO_USE 00:08:13.353 #define SPDK_CONFIG_PREFIX /usr/local 00:08:13.353 #undef SPDK_CONFIG_RAID5F 00:08:13.353 #undef SPDK_CONFIG_RBD 00:08:13.353 #define SPDK_CONFIG_RDMA 1 00:08:13.353 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:13.353 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:13.353 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:13.353 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:13.353 #define SPDK_CONFIG_SHARED 1 00:08:13.353 #undef SPDK_CONFIG_SMA 00:08:13.353 #define SPDK_CONFIG_TESTS 1 00:08:13.353 #undef SPDK_CONFIG_TSAN 00:08:13.353 #define SPDK_CONFIG_UBLK 1 00:08:13.353 #define SPDK_CONFIG_UBSAN 1 00:08:13.353 #undef SPDK_CONFIG_UNIT_TESTS 00:08:13.353 #undef SPDK_CONFIG_URING 00:08:13.353 #define SPDK_CONFIG_URING_PATH 00:08:13.353 #undef SPDK_CONFIG_URING_ZNS 00:08:13.353 #undef SPDK_CONFIG_USDT 00:08:13.353 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:13.353 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:13.353 #undef SPDK_CONFIG_VFIO_USER 00:08:13.353 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:13.353 #define SPDK_CONFIG_VHOST 1 00:08:13.353 #define SPDK_CONFIG_VIRTIO 1 00:08:13.353 #undef SPDK_CONFIG_VTUNE 00:08:13.353 #define SPDK_CONFIG_VTUNE_DIR 00:08:13.353 #define SPDK_CONFIG_WERROR 1 00:08:13.353 #define SPDK_CONFIG_WPDK_DIR 00:08:13.353 #undef SPDK_CONFIG_XNVME 00:08:13.353 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:08:13.353 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # : rdma 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:08:13.354 20:55:03 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # : mlx5 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 
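The run of autotest_common.sh trace lines above records each SPDK_TEST_* feature flag being defaulted and exported before the filesystem test proper starts: this nvmf-phy job enables RUN_NIGHTLY, SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_NVME_CLI, SPDK_TEST_NVMF and SPDK_RUN_UBSAN, sets SPDK_TEST_NVMF_TRANSPORT=rdma and SPDK_TEST_NVMF_NICS=mlx5, and points SPDK_RUN_EXTERNAL_DPDK at the dpdk/build tree unpacked earlier in the pipeline. A minimal sketch of the idiom behind each "-- # : 0" / "-- # export VAR" pair, assuming the usual bash default-assignment pattern rather than quoting the script verbatim:

    # sketch (assumed shape, not the verbatim source): ":" is the no-op
    # builtin, so the expansion assigns a default only when the variable is
    # unset; the export on the next line then publishes it to child processes
    : "${SPDK_TEST_NVMF:=0}"               # traced above as "-- # : 1" (set by the job)
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"  # this job runs the rdma transport
    export SPDK_TEST_NVMF_TRANSPORT

Under that reading, the value printed after each ":" in the trace is simply the flag's effective value for this run, which makes the flag dump a compact record of the job's configuration.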
00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:13.354 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@199 -- # cat 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=rdma 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3384222 ]] 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3384222 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.xj5bt4 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xj5bt4/tests/target /tmp/spdk.xj5bt4 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:08:13.355 
20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:13.355 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=951066624 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4333363200 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=54350884864 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742268416 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=7391383552 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30867759104 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871134208 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12339036160 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348456960 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9420800 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # avails["$mount"]=30870867968 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871134208 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=266240 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6174220288 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174224384 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:13.356 * Looking for test storage... 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=54350884864 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=9605976064 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.356 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:08:13.356 20:55:03 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:13.356 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:13.357 
20:55:03 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.357 20:55:03 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- 
# x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:19.927 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:19.927 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.927 20:55:10 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.927 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:19.927 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:19.928 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:19.928 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:19.928 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:19.928 altname enp217s0f0np0 00:08:19.928 altname ens818f0np0 00:08:19.928 inet 192.168.100.8/24 scope global mlx_0_0 00:08:19.928 valid_lft forever preferred_lft forever 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:19.928 20:55:10 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:19.928 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:19.928 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:19.928 altname enp217s0f1np1 00:08:19.928 altname ens818f1np1 00:08:19.928 inet 192.168.100.9/24 scope global mlx_0_1 00:08:19.928 valid_lft forever preferred_lft forever 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:19.928 20:55:10 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:19.928 192.168.100.9' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:19.928 192.168.100.9' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:19.928 192.168.100.9' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:19.928 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.929 ************************************ 00:08:19.929 START TEST nvmf_filesystem_no_in_capsule 00:08:19.929 ************************************ 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3387379 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.929 20:55:10 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3387379 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3387379 ']' 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:19.929 20:55:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.929 [2024-07-13 20:55:10.412998] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:19.929 [2024-07-13 20:55:10.413052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.929 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.929 [2024-07-13 20:55:10.486172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.929 [2024-07-13 20:55:10.528720] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.929 [2024-07-13 20:55:10.528764] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.929 [2024-07-13 20:55:10.528773] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.929 [2024-07-13 20:55:10.528782] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.929 [2024-07-13 20:55:10.528789] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
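The interface discovery traced just above (nvmf/common.sh@86 through @113) boils down to a small helper: walk the RDMA-capable netdevs reported by rxe_cfg, take the first IPv4 address of each, and strip the prefix length. A minimal sketch, with the address pipeline and the first/second-target split copied verbatim from the trace; the addresses in the comments are this rig's:

    get_ip_address() {
        local interface=$1
        # first IPv4 address on the interface, without the /24 suffix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"                                  # 192.168.100.8, 192.168.100.9
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)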
00:08:19.929 [2024-07-13 20:55:10.528840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.929 [2024-07-13 20:55:10.528935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.929 [2024-07-13 20:55:10.529031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.929 [2024-07-13 20:55:10.529034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.497 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:20.497 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:20.497 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.497 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.497 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.497 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.497 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:20.497 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:20.497 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.497 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.497 [2024-07-13 20:55:11.273993] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:20.497 [2024-07-13 20:55:11.296693] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbbfc80/0xbc4170) succeed. 00:08:20.497 [2024-07-13 20:55:11.306946] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbc12c0/0xc05800) succeed. 
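rpc_cmd in the harness drives the target over its UNIX-domain RPC socket; issued by hand, the transport-creation step above would look roughly like the following (rpc.py and the default socket path are standard SPDK, the flags are verbatim from the trace). Here -c 0 asks for no in-capsule data, and the WARNING shows the target clamping that to the 256-byte minimum needed for msdbd=16:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # -t transport type, -u IO unit size, -c advertised in-capsule data size
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192 -c 0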
00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.757 Malloc1 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.757 [2024-07-13 20:55:11.552448] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.757 20:55:11 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.757 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:20.757 { 00:08:20.757 "name": "Malloc1", 00:08:20.757 "aliases": [ 00:08:20.757 "f2378a08-176a-4576-a051-5024914fa58e" 00:08:20.757 ], 00:08:20.757 "product_name": "Malloc disk", 00:08:20.757 "block_size": 512, 00:08:20.757 "num_blocks": 1048576, 00:08:20.757 "uuid": "f2378a08-176a-4576-a051-5024914fa58e", 00:08:20.757 "assigned_rate_limits": { 00:08:20.757 "rw_ios_per_sec": 0, 00:08:20.757 "rw_mbytes_per_sec": 0, 00:08:20.757 "r_mbytes_per_sec": 0, 00:08:20.757 "w_mbytes_per_sec": 0 00:08:20.757 }, 00:08:20.757 "claimed": true, 00:08:20.757 "claim_type": "exclusive_write", 00:08:20.757 "zoned": false, 00:08:20.757 "supported_io_types": { 00:08:20.757 "read": true, 00:08:20.757 "write": true, 00:08:20.757 "unmap": true, 00:08:20.757 "write_zeroes": true, 00:08:20.757 "flush": true, 00:08:20.757 "reset": true, 00:08:20.757 "compare": false, 00:08:20.757 "compare_and_write": false, 00:08:20.757 "abort": true, 00:08:20.757 "nvme_admin": false, 00:08:20.757 "nvme_io": false 00:08:20.757 }, 00:08:20.757 "memory_domains": [ 00:08:20.757 { 00:08:20.758 "dma_device_id": "system", 00:08:20.758 "dma_device_type": 1 00:08:20.758 }, 00:08:20.758 { 00:08:20.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.758 "dma_device_type": 2 00:08:20.758 } 00:08:20.758 ], 00:08:20.758 "driver_specific": {} 00:08:20.758 } 00:08:20.758 ]' 00:08:20.758 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:20.758 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:20.758 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:21.017 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:21.017 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:21.017 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:21.017 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:21.017 20:55:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:21.955 20:55:12 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:21.955 20:55:12 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:21.955 20:55:12 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.955 20:55:12 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:21.955 20:55:12 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:23.857 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:24.116 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:24.116 20:55:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.054 ************************************ 00:08:25.054 START TEST filesystem_ext4 00:08:25.054 ************************************ 00:08:25.054 20:55:15 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:25.054 20:55:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:25.054 mke2fs 1.46.5 (30-Dec-2021) 00:08:25.314 Discarding device blocks: 0/522240 done 00:08:25.314 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:25.314 Filesystem UUID: f29f9e08-6174-4738-989c-6a0e6f53efaf 00:08:25.314 Superblock backups stored on blocks: 00:08:25.314 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:25.314 00:08:25.314 Allocating group tables: 0/64 done 00:08:25.314 Writing inode tables: 0/64 done 00:08:25.314 Creating journal (8192 blocks): done 00:08:25.314 Writing superblocks and filesystem accounting information: 0/64 done 00:08:25.314 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3387379 00:08:25.314 20:55:16 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.314 00:08:25.314 real 0m0.183s 00:08:25.314 user 0m0.023s 00:08:25.314 sys 0m0.075s 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:25.314 ************************************ 00:08:25.314 END TEST filesystem_ext4 00:08:25.314 ************************************ 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.314 ************************************ 00:08:25.314 START TEST filesystem_btrfs 00:08:25.314 ************************************ 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:25.314 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:25.574 btrfs-progs v6.6.2 00:08:25.574 See https://btrfs.readthedocs.io for more information. 
00:08:25.574 00:08:25.574 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:25.574 NOTE: several default settings have changed in version 5.15, please make sure 00:08:25.574 this does not affect your deployments: 00:08:25.574 - DUP for metadata (-m dup) 00:08:25.574 - enabled no-holes (-O no-holes) 00:08:25.574 - enabled free-space-tree (-R free-space-tree) 00:08:25.574 00:08:25.574 Label: (null) 00:08:25.574 UUID: 64aa252e-cbe1-474d-9c99-4b4909a8647d 00:08:25.574 Node size: 16384 00:08:25.574 Sector size: 4096 00:08:25.574 Filesystem size: 510.00MiB 00:08:25.574 Block group profiles: 00:08:25.574 Data: single 8.00MiB 00:08:25.574 Metadata: DUP 32.00MiB 00:08:25.574 System: DUP 8.00MiB 00:08:25.574 SSD detected: yes 00:08:25.574 Zoned device: no 00:08:25.574 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:25.574 Runtime features: free-space-tree 00:08:25.574 Checksum: crc32c 00:08:25.574 Number of devices: 1 00:08:25.574 Devices: 00:08:25.574 ID SIZE PATH 00:08:25.574 1 510.00MiB /dev/nvme0n1p1 00:08:25.574 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3387379 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.574 00:08:25.574 real 0m0.261s 00:08:25.574 user 0m0.034s 00:08:25.574 sys 0m0.136s 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.574 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:25.575 ************************************ 00:08:25.575 END TEST filesystem_btrfs 00:08:25.575 ************************************ 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 
-- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.834 ************************************ 00:08:25.834 START TEST filesystem_xfs 00:08:25.834 ************************************ 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:25.834 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:25.834 = sectsz=512 attr=2, projid32bit=1 00:08:25.834 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:25.834 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:25.834 data = bsize=4096 blocks=130560, imaxpct=25 00:08:25.834 = sunit=0 swidth=0 blks 00:08:25.834 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:25.834 log =internal log bsize=4096 blocks=16384, version=2 00:08:25.834 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:25.834 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:25.834 Discarding blocks...Done. 
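Each filesystem pass above runs the same script body (target/filesystem.sh@21 onward): format the partition exported over RDMA, push a small write through it, then check that the target process and the block device both survived. Condensed from the trace, with this run's PID; the helper's retry counter is dropped for brevity:

    make_filesystem() {                              # common/autotest_common.sh@922-933
        local fstype=$1 dev_name=$2 force
        [ "$fstype" = ext4 ] && force=-F || force=-f # ext4 spells "force" as -F
        mkfs.$fstype $force "$dev_name"
    }
    make_filesystem xfs /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 3387379                                  # nvmf_tgt still alive?
    lsblk -l -o NAME | grep -q -w nvme0n1            # namespace still visible?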
00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3387379 00:08:25.834 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:26.094 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:26.094 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:26.094 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:26.094 00:08:26.094 real 0m0.217s 00:08:26.094 user 0m0.030s 00:08:26.094 sys 0m0.080s 00:08:26.094 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.094 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:26.094 ************************************ 00:08:26.094 END TEST filesystem_xfs 00:08:26.094 ************************************ 00:08:26.094 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:26.094 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:26.094 20:55:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:27.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o 
NAME,SERIAL 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.032 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3387379 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3387379 ']' 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3387379 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3387379 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3387379' 00:08:27.033 killing process with pid 3387379 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3387379 00:08:27.033 20:55:17 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3387379 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:27.602 00:08:27.602 real 0m7.897s 00:08:27.602 user 0m30.924s 00:08:27.602 sys 0m1.213s 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 ************************************ 00:08:27.602 END TEST nvmf_filesystem_no_in_capsule 00:08:27.602 ************************************ 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@10 -- # set +x 00:08:27.602 ************************************ 00:08:27.602 START TEST nvmf_filesystem_in_capsule 00:08:27.602 ************************************ 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3388995 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3388995 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3388995 ']' 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:27.602 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 [2024-07-13 20:55:18.397105] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:27.602 [2024-07-13 20:55:18.397153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.602 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.602 [2024-07-13 20:55:18.469659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.861 [2024-07-13 20:55:18.508094] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.861 [2024-07-13 20:55:18.508137] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.861 [2024-07-13 20:55:18.508147] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.861 [2024-07-13 20:55:18.508156] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:27.861 [2024-07-13 20:55:18.508165] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.861 [2024-07-13 20:55:18.508208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.861 [2024-07-13 20:55:18.508304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.861 [2024-07-13 20:55:18.508391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.861 [2024-07-13 20:55:18.508392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.861 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:27.861 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:27.861 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:27.861 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.861 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.861 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.861 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:27.861 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:27.861 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.861 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.861 [2024-07-13 20:55:18.697991] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1786c80/0x178b170) succeed. 00:08:27.861 [2024-07-13 20:55:18.708454] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17882c0/0x17cc800) succeed. 
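This second phase repeats the whole sequence with the one knob nvmf_filesystem_part varies: 4096 bytes of in-capsule data instead of 0, so there is no clamping WARNING this time. The differing line, again sketched as a direct rpc.py call:

    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192 -c 4096    # allow 4 KiB in-capsule writes

With in-capsule data enabled, writes up to that size travel inside the fabrics command capsule itself, sparing the target an RDMA READ per small I/O.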
00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.121 Malloc1 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.121 [2024-07-13 20:55:18.975474] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.121 20:55:18 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.121 
20:55:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.121 20:55:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:28.121 { 00:08:28.121 "name": "Malloc1", 00:08:28.121 "aliases": [ 00:08:28.121 "80925a9e-c1c2-419e-b85d-edf7b3bf2063" 00:08:28.121 ], 00:08:28.121 "product_name": "Malloc disk", 00:08:28.121 "block_size": 512, 00:08:28.121 "num_blocks": 1048576, 00:08:28.121 "uuid": "80925a9e-c1c2-419e-b85d-edf7b3bf2063", 00:08:28.121 "assigned_rate_limits": { 00:08:28.121 "rw_ios_per_sec": 0, 00:08:28.121 "rw_mbytes_per_sec": 0, 00:08:28.121 "r_mbytes_per_sec": 0, 00:08:28.121 "w_mbytes_per_sec": 0 00:08:28.121 }, 00:08:28.121 "claimed": true, 00:08:28.121 "claim_type": "exclusive_write", 00:08:28.121 "zoned": false, 00:08:28.121 "supported_io_types": { 00:08:28.121 "read": true, 00:08:28.121 "write": true, 00:08:28.121 "unmap": true, 00:08:28.121 "write_zeroes": true, 00:08:28.121 "flush": true, 00:08:28.121 "reset": true, 00:08:28.121 "compare": false, 00:08:28.121 "compare_and_write": false, 00:08:28.121 "abort": true, 00:08:28.121 "nvme_admin": false, 00:08:28.121 "nvme_io": false 00:08:28.121 }, 00:08:28.121 "memory_domains": [ 00:08:28.121 { 00:08:28.121 "dma_device_id": "system", 00:08:28.121 "dma_device_type": 1 00:08:28.121 }, 00:08:28.121 { 00:08:28.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.121 "dma_device_type": 2 00:08:28.121 } 00:08:28.121 ], 00:08:28.121 "driver_specific": {} 00:08:28.121 } 00:08:28.121 ]' 00:08:28.121 20:55:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:28.380 20:55:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:28.380 20:55:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:28.380 20:55:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:28.380 20:55:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:28.380 20:55:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:28.380 20:55:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:28.380 20:55:19 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:29.318 20:55:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:29.318 20:55:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:29.318 20:55:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:29.318 20:55:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:29.318 20:55:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:31.274 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:31.533 20:55:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.474 ************************************ 00:08:32.474 START TEST filesystem_in_capsule_ext4 00:08:32.474 ************************************ 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 
-- target/filesystem.sh@18 -- # fstype=ext4 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:32.474 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:32.474 mke2fs 1.46.5 (30-Dec-2021) 00:08:32.732 Discarding device blocks: 0/522240 done 00:08:32.732 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:32.732 Filesystem UUID: ebe98af0-877f-4bc0-9b3d-dce35ffa4276 00:08:32.732 Superblock backups stored on blocks: 00:08:32.732 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:32.732 00:08:32.732 Allocating group tables: 0/64 done 00:08:32.732 Writing inode tables: 0/64 done 00:08:32.732 Creating journal (8192 blocks): done 00:08:32.732 Writing superblocks and filesystem accounting information: 0/64 done 00:08:32.732 00:08:32.732 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3388995 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- 
# lsblk -l -o NAME 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.733 00:08:32.733 real 0m0.188s 00:08:32.733 user 0m0.023s 00:08:32.733 sys 0m0.082s 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:32.733 ************************************ 00:08:32.733 END TEST filesystem_in_capsule_ext4 00:08:32.733 ************************************ 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.733 ************************************ 00:08:32.733 START TEST filesystem_in_capsule_btrfs 00:08:32.733 ************************************ 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:32.733 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:32.991 btrfs-progs v6.6.2 00:08:32.991 See 
https://btrfs.readthedocs.io for more information. 00:08:32.991 00:08:32.991 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:32.991 NOTE: several default settings have changed in version 5.15, please make sure 00:08:32.991 this does not affect your deployments: 00:08:32.991 - DUP for metadata (-m dup) 00:08:32.991 - enabled no-holes (-O no-holes) 00:08:32.991 - enabled free-space-tree (-R free-space-tree) 00:08:32.991 00:08:32.991 Label: (null) 00:08:32.991 UUID: 74b91267-f2b6-4860-b20e-3e6e03eceec5 00:08:32.991 Node size: 16384 00:08:32.991 Sector size: 4096 00:08:32.991 Filesystem size: 510.00MiB 00:08:32.991 Block group profiles: 00:08:32.991 Data: single 8.00MiB 00:08:32.991 Metadata: DUP 32.00MiB 00:08:32.991 System: DUP 8.00MiB 00:08:32.991 SSD detected: yes 00:08:32.991 Zoned device: no 00:08:32.991 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:32.991 Runtime features: free-space-tree 00:08:32.991 Checksum: crc32c 00:08:32.991 Number of devices: 1 00:08:32.991 Devices: 00:08:32.991 ID SIZE PATH 00:08:32.991 1 510.00MiB /dev/nvme0n1p1 00:08:32.991 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3388995 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.991 00:08:32.991 real 0m0.258s 00:08:32.991 user 0m0.034s 00:08:32.991 sys 0m0.135s 00:08:32.991 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.992 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:32.992 ************************************ 00:08:32.992 END TEST 
filesystem_in_capsule_btrfs 00:08:32.992 ************************************ 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.250 ************************************ 00:08:33.250 START TEST filesystem_in_capsule_xfs 00:08:33.250 ************************************ 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:33.250 20:55:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:33.250 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:33.250 = sectsz=512 attr=2, projid32bit=1 00:08:33.250 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:33.250 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:33.250 data = bsize=4096 blocks=130560, imaxpct=25 00:08:33.250 = sunit=0 swidth=0 blks 00:08:33.250 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:33.250 log =internal log bsize=4096 blocks=16384, version=2 00:08:33.250 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:33.250 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:33.250 Discarding blocks...Done. 
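All three mkfs runs above (ext4, btrfs, xfs) go through the same make_filesystem helper in common/autotest_common.sh. A minimal sketch of that helper, reconstructed from the xtraced lines sh@922-941 in this log (the retry bound and sleep are assumptions; only a single successful pass is visible here):

make_filesystem() {
    local fstype=$1        # ext4 | btrfs | xfs
    local dev_name=$2      # /dev/nvme0n1p1 in this run
    local i=0
    local force
    # ext4 forces with -F, the others with -f (sh@927-930 above)
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    # retry until the device accepts the filesystem; loop details assumed
    until mkfs.$fstype $force "$dev_name"; do
        ((++i > 5)) && return 1
        sleep 1
    done
    return 0
}

Each filesystem is then exercised the same way (target/filesystem.sh@23-43): mount, touch a file, sync, remove it, sync again, umount, and confirm via lsblk that nvme0n1 and nvme0n1p1 are still present.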
00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3388995 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:33.250 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:33.509 00:08:33.509 real 0m0.200s 00:08:33.509 user 0m0.025s 00:08:33.509 sys 0m0.080s 00:08:33.509 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.509 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:33.509 ************************************ 00:08:33.509 END TEST filesystem_in_capsule_xfs 00:08:33.509 ************************************ 00:08:33.509 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:33.509 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:33.509 20:55:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.443 20:55:25 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3388995 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3388995 ']' 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3388995 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3388995 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3388995' 00:08:34.443 killing process with pid 3388995 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3388995 00:08:34.443 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3388995 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:35.010 00:08:35.010 real 0m7.336s 00:08:35.010 user 0m28.492s 00:08:35.010 sys 0m1.236s 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 ************************************ 00:08:35.010 END TEST nvmf_filesystem_in_capsule 00:08:35.010 ************************************ 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:35.010 
20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:35.010 rmmod nvme_rdma 00:08:35.010 rmmod nvme_fabrics 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:35.010 00:08:35.010 real 0m22.236s 00:08:35.010 user 1m1.499s 00:08:35.010 sys 0m7.563s 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.010 20:55:25 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 ************************************ 00:08:35.010 END TEST nvmf_filesystem 00:08:35.010 ************************************ 00:08:35.010 20:55:25 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:35.010 20:55:25 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:35.010 20:55:25 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.010 20:55:25 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:35.010 ************************************ 00:08:35.010 START TEST nvmf_target_discovery 00:08:35.010 ************************************ 00:08:35.010 20:55:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:35.270 * Looking for test storage... 
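The teardown just above (nvmf/common.sh@120-125) drops the kernel initiator modules before the next test begins. A condensed sketch of that unload loop as it appears in the trace (the break/sleep behavior inside the loop is an assumption; only the loop header and the two modprobe calls are visible):

# tolerate transient 'module in use' errors while queues drain
set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumed back-off between attempts
done
set -e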
00:08:35.270 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:35.270 20:55:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:41.848 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:41.848 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.848 20:55:32 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:41.848 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:41.848 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.848 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:41.849 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.849 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:41.849 altname enp217s0f0np0 00:08:41.849 altname ens818f0np0 00:08:41.849 inet 192.168.100.8/24 scope global mlx_0_0 00:08:41.849 valid_lft forever preferred_lft forever 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:41.849 20:55:32 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:41.849 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.849 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:41.849 altname enp217s0f1np1 00:08:41.849 altname ens818f1np1 00:08:41.849 inet 192.168.100.9/24 scope global mlx_0_1 00:08:41.849 valid_lft forever preferred_lft forever 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:41.849 192.168.100.9' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:41.849 192.168.100.9' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:41.849 192.168.100.9' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3393721 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3393721 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3393721 ']' 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
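The 192.168.100.8/192.168.100.9 addresses that nvmf_tgt will listen on were scraped from the Mellanox interfaces a few lines up (nvmf/common.sh@112-113). The same pipeline, extracted as a standalone helper (interface names mlx_0_0/mlx_0_1 are the ones this host detected):

get_ip_address() {
    local interface=$1
    # '6: mlx_0_0    inet 192.168.100.8/24 ...' -> 192.168.100.8
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run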
00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:41.849 20:55:32 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:41.849 [2024-07-13 20:55:32.584079] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:41.849 [2024-07-13 20:55:32.584131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.849 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.849 [2024-07-13 20:55:32.657248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.849 [2024-07-13 20:55:32.696908] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.849 [2024-07-13 20:55:32.696954] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.849 [2024-07-13 20:55:32.696963] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.849 [2024-07-13 20:55:32.696972] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.849 [2024-07-13 20:55:32.696979] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.849 [2024-07-13 20:55:32.697039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.849 [2024-07-13 20:55:32.697144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.849 [2024-07-13 20:55:32.697231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.849 [2024-07-13 20:55:32.697234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:42.786 [2024-07-13 20:55:33.465434] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13bbc80/0x13c0170) succeed. 00:08:42.786 [2024-07-13 20:55:33.475727] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13bd2c0/0x1401800) succeed. 
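With the target up and both IB devices created, discovery.sh provisions four identical null-bdev subsystems over RPC. Every rpc_cmd below is a thin wrapper around scripts/rpc.py, so the same configuration can be replayed by hand; this loop mirrors target/discovery.sh@23-35, with sizes taken from NULL_BDEV_SIZE=102400 and NULL_BLOCK_SIZE=512 above:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

for i in $(seq 1 4); do
    $rpc bdev_null_create Null$i 102400 512    # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done

$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

The nvme discover output further down should then report six records: the current discovery subsystem, the four cnode subsystems, and the port-4430 referral.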
00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:42.786 Null1 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:42.786 [2024-07-13 20:55:33.639029] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:42.786 Null2 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:42.786 20:55:33 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.786 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 Null3 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 Null4 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.045 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:08:43.045 00:08:43.045 Discovery Log Number of Records 6, Generation counter 6 00:08:43.045 =====Discovery Log Entry 0====== 00:08:43.045 trtype: rdma 00:08:43.045 adrfam: ipv4 00:08:43.045 subtype: current discovery subsystem 00:08:43.045 treq: not required 00:08:43.045 portid: 0 00:08:43.045 trsvcid: 4420 00:08:43.045 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:43.045 traddr: 192.168.100.8 00:08:43.045 eflags: explicit discovery connections, duplicate discovery information 00:08:43.045 rdma_prtype: not specified 00:08:43.045 rdma_qptype: connected 00:08:43.045 rdma_cms: rdma-cm 00:08:43.045 rdma_pkey: 0x0000 00:08:43.045 =====Discovery Log Entry 1====== 00:08:43.045 trtype: rdma 00:08:43.045 adrfam: ipv4 00:08:43.045 subtype: nvme subsystem 00:08:43.045 treq: not required 00:08:43.045 portid: 0 00:08:43.045 trsvcid: 4420 00:08:43.045 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:43.045 traddr: 192.168.100.8 00:08:43.045 eflags: none 00:08:43.045 rdma_prtype: not specified 00:08:43.045 rdma_qptype: connected 00:08:43.045 rdma_cms: rdma-cm 00:08:43.045 rdma_pkey: 0x0000 00:08:43.045 =====Discovery Log Entry 2====== 00:08:43.045 
trtype: rdma 00:08:43.045 adrfam: ipv4 00:08:43.045 subtype: nvme subsystem 00:08:43.045 treq: not required 00:08:43.045 portid: 0 00:08:43.045 trsvcid: 4420 00:08:43.045 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:43.045 traddr: 192.168.100.8 00:08:43.045 eflags: none 00:08:43.045 rdma_prtype: not specified 00:08:43.045 rdma_qptype: connected 00:08:43.045 rdma_cms: rdma-cm 00:08:43.045 rdma_pkey: 0x0000 00:08:43.045 =====Discovery Log Entry 3====== 00:08:43.045 trtype: rdma 00:08:43.045 adrfam: ipv4 00:08:43.045 subtype: nvme subsystem 00:08:43.045 treq: not required 00:08:43.045 portid: 0 00:08:43.045 trsvcid: 4420 00:08:43.045 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:43.045 traddr: 192.168.100.8 00:08:43.045 eflags: none 00:08:43.045 rdma_prtype: not specified 00:08:43.045 rdma_qptype: connected 00:08:43.045 rdma_cms: rdma-cm 00:08:43.045 rdma_pkey: 0x0000 00:08:43.045 =====Discovery Log Entry 4====== 00:08:43.045 trtype: rdma 00:08:43.045 adrfam: ipv4 00:08:43.045 subtype: nvme subsystem 00:08:43.045 treq: not required 00:08:43.045 portid: 0 00:08:43.045 trsvcid: 4420 00:08:43.045 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:43.045 traddr: 192.168.100.8 00:08:43.045 eflags: none 00:08:43.045 rdma_prtype: not specified 00:08:43.045 rdma_qptype: connected 00:08:43.045 rdma_cms: rdma-cm 00:08:43.045 rdma_pkey: 0x0000 00:08:43.045 =====Discovery Log Entry 5====== 00:08:43.045 trtype: rdma 00:08:43.045 adrfam: ipv4 00:08:43.045 subtype: discovery subsystem referral 00:08:43.045 treq: not required 00:08:43.045 portid: 0 00:08:43.045 trsvcid: 4430 00:08:43.045 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:43.045 traddr: 192.168.100.8 00:08:43.045 eflags: none 00:08:43.045 rdma_prtype: unrecognized 00:08:43.045 rdma_qptype: unrecognized 00:08:43.046 rdma_cms: unrecognized 00:08:43.046 rdma_pkey: 0x0000 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:43.046 Perform nvmf subsystem discovery via RPC 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.046 [ 00:08:43.046 { 00:08:43.046 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:43.046 "subtype": "Discovery", 00:08:43.046 "listen_addresses": [ 00:08:43.046 { 00:08:43.046 "trtype": "RDMA", 00:08:43.046 "adrfam": "IPv4", 00:08:43.046 "traddr": "192.168.100.8", 00:08:43.046 "trsvcid": "4420" 00:08:43.046 } 00:08:43.046 ], 00:08:43.046 "allow_any_host": true, 00:08:43.046 "hosts": [] 00:08:43.046 }, 00:08:43.046 { 00:08:43.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:43.046 "subtype": "NVMe", 00:08:43.046 "listen_addresses": [ 00:08:43.046 { 00:08:43.046 "trtype": "RDMA", 00:08:43.046 "adrfam": "IPv4", 00:08:43.046 "traddr": "192.168.100.8", 00:08:43.046 "trsvcid": "4420" 00:08:43.046 } 00:08:43.046 ], 00:08:43.046 "allow_any_host": true, 00:08:43.046 "hosts": [], 00:08:43.046 "serial_number": "SPDK00000000000001", 00:08:43.046 "model_number": "SPDK bdev Controller", 00:08:43.046 "max_namespaces": 32, 00:08:43.046 "min_cntlid": 1, 00:08:43.046 "max_cntlid": 65519, 00:08:43.046 "namespaces": [ 00:08:43.046 { 00:08:43.046 "nsid": 1, 00:08:43.046 "bdev_name": "Null1", 00:08:43.046 "name": "Null1", 00:08:43.046 "nguid": "6B9A96E95B2748F08D6B6859A71493B3", 00:08:43.046 "uuid": 
"6b9a96e9-5b27-48f0-8d6b-6859a71493b3" 00:08:43.046 } 00:08:43.046 ] 00:08:43.046 }, 00:08:43.046 { 00:08:43.046 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:43.046 "subtype": "NVMe", 00:08:43.046 "listen_addresses": [ 00:08:43.046 { 00:08:43.046 "trtype": "RDMA", 00:08:43.046 "adrfam": "IPv4", 00:08:43.046 "traddr": "192.168.100.8", 00:08:43.046 "trsvcid": "4420" 00:08:43.046 } 00:08:43.046 ], 00:08:43.046 "allow_any_host": true, 00:08:43.046 "hosts": [], 00:08:43.046 "serial_number": "SPDK00000000000002", 00:08:43.046 "model_number": "SPDK bdev Controller", 00:08:43.046 "max_namespaces": 32, 00:08:43.046 "min_cntlid": 1, 00:08:43.046 "max_cntlid": 65519, 00:08:43.046 "namespaces": [ 00:08:43.046 { 00:08:43.046 "nsid": 1, 00:08:43.046 "bdev_name": "Null2", 00:08:43.046 "name": "Null2", 00:08:43.046 "nguid": "5F84927BA4D24746A9F361296917773F", 00:08:43.046 "uuid": "5f84927b-a4d2-4746-a9f3-61296917773f" 00:08:43.046 } 00:08:43.046 ] 00:08:43.046 }, 00:08:43.046 { 00:08:43.046 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:43.046 "subtype": "NVMe", 00:08:43.046 "listen_addresses": [ 00:08:43.046 { 00:08:43.046 "trtype": "RDMA", 00:08:43.046 "adrfam": "IPv4", 00:08:43.046 "traddr": "192.168.100.8", 00:08:43.046 "trsvcid": "4420" 00:08:43.046 } 00:08:43.046 ], 00:08:43.046 "allow_any_host": true, 00:08:43.046 "hosts": [], 00:08:43.046 "serial_number": "SPDK00000000000003", 00:08:43.046 "model_number": "SPDK bdev Controller", 00:08:43.046 "max_namespaces": 32, 00:08:43.046 "min_cntlid": 1, 00:08:43.046 "max_cntlid": 65519, 00:08:43.046 "namespaces": [ 00:08:43.046 { 00:08:43.046 "nsid": 1, 00:08:43.046 "bdev_name": "Null3", 00:08:43.046 "name": "Null3", 00:08:43.046 "nguid": "8FC69454846A421F91A8D0D1EA3B8CC3", 00:08:43.046 "uuid": "8fc69454-846a-421f-91a8-d0d1ea3b8cc3" 00:08:43.046 } 00:08:43.046 ] 00:08:43.046 }, 00:08:43.046 { 00:08:43.046 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:43.046 "subtype": "NVMe", 00:08:43.046 "listen_addresses": [ 00:08:43.046 { 00:08:43.046 "trtype": "RDMA", 00:08:43.046 "adrfam": "IPv4", 00:08:43.046 "traddr": "192.168.100.8", 00:08:43.046 "trsvcid": "4420" 00:08:43.046 } 00:08:43.046 ], 00:08:43.046 "allow_any_host": true, 00:08:43.046 "hosts": [], 00:08:43.046 "serial_number": "SPDK00000000000004", 00:08:43.046 "model_number": "SPDK bdev Controller", 00:08:43.046 "max_namespaces": 32, 00:08:43.046 "min_cntlid": 1, 00:08:43.046 "max_cntlid": 65519, 00:08:43.046 "namespaces": [ 00:08:43.046 { 00:08:43.046 "nsid": 1, 00:08:43.046 "bdev_name": "Null4", 00:08:43.046 "name": "Null4", 00:08:43.046 "nguid": "C221E1D4C5244920AF0C2363954B7857", 00:08:43.046 "uuid": "c221e1d4-c524-4920-af0c-2363954b7857" 00:08:43.046 } 00:08:43.046 ] 00:08:43.046 } 00:08:43.046 ] 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.046 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.305 20:55:33 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:43.305 rmmod nvme_rdma 00:08:43.305 rmmod nvme_fabrics 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3393721 ']' 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3393721 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3393721 ']' 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3393721 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3393721 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3393721' 00:08:43.305 killing process with pid 3393721 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3393721 00:08:43.305 20:55:34 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@970 -- # wait 3393721 00:08:43.563 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.563 20:55:34 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:43.563 00:08:43.563 real 0m8.516s 00:08:43.563 user 0m8.473s 00:08:43.563 sys 0m5.535s 00:08:43.563 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:43.563 20:55:34 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.563 ************************************ 00:08:43.563 END TEST nvmf_target_discovery 00:08:43.563 ************************************ 00:08:43.563 20:55:34 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:43.563 20:55:34 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:43.563 20:55:34 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:43.563 20:55:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:43.563 ************************************ 00:08:43.563 START TEST nvmf_referrals 00:08:43.563 ************************************ 00:08:43.563 20:55:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:43.822 * Looking for test storage... 00:08:43.822 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.822 20:55:34 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.823 20:55:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:50.393 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:50.393 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:50.393 
20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:50.393 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:50.393 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:50.393 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:50.654 20:55:41 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:50.654 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:50.654 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:50.654 altname enp217s0f0np0 00:08:50.654 altname ens818f0np0 00:08:50.654 inet 192.168.100.8/24 scope global mlx_0_0 00:08:50.654 valid_lft forever preferred_lft forever 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- 
# cut -d/ -f1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:50.654 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:50.654 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:50.654 altname enp217s0f1np1 00:08:50.654 altname ens818f1np1 00:08:50.654 inet 192.168.100.9/24 scope global mlx_0_1 00:08:50.654 valid_lft forever preferred_lft forever 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:50.654 192.168.100.9' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:50.654 192.168.100.9' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:50.654 192.168.100.9' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3397350 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3397350 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3397350 ']' 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:50.654 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.655 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:50.655 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:50.655 [2024-07-13 20:55:41.532092] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
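The nvmfappstart/waitforlisten pair driving this startup (nvmfpid=3397350 above) reduces to launching nvmf_tgt and polling its RPC socket until it answers. A minimal standalone sketch, assuming the SPDK tree from this workspace and using the real rpc_get_methods RPC purely as a liveness probe — the harness's own readiness check differs in detail:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# Same flags as the trace: shm id 0, tracepoint mask 0xFFFF, core mask 0xF.
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default UNIX-domain RPC socket until the app is up.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) ready for RPCs"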
00:08:50.655 [2024-07-13 20:55:41.532141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.914 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.914 [2024-07-13 20:55:41.602089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.914 [2024-07-13 20:55:41.641969] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.914 [2024-07-13 20:55:41.642031] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.914 [2024-07-13 20:55:41.642041] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.914 [2024-07-13 20:55:41.642050] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.914 [2024-07-13 20:55:41.642057] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.914 [2024-07-13 20:55:41.642112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.914 [2024-07-13 20:55:41.642205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.914 [2024-07-13 20:55:41.642291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.914 [2024-07-13 20:55:41.642292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.914 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:50.914 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:50.914 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.914 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:50.914 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:50.914 20:55:41 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.914 20:55:41 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:50.914 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.914 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.173 [2024-07-13 20:55:41.813359] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20aac80/0x20af170) succeed. 00:08:51.173 [2024-07-13 20:55:41.823669] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20ac2c0/0x20f0800) succeed. 
00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.173 [2024-07-13 20:55:41.945700] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.173 20:55:41 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.173 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:51.173 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:51.173 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:51.173 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:51.173 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:51.173 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.173 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.173 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:51.173 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.431 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:51.431 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:51.431 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:51.431 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:51.432 20:55:42 
nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:51.432 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:51.690 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:51.949 
20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:51.949 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:52.208 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:52.208 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:52.208 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:52.208 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:52.208 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:52.208 20:55:42 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:52.208 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:52.467 rmmod nvme_rdma 00:08:52.467 rmmod nvme_fabrics 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3397350 ']' 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3397350 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3397350 ']' 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3397350 00:08:52.467 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:52.468 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:52.468 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3397350 00:08:52.468 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:52.468 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:52.468 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3397350' 00:08:52.468 killing process with pid 3397350 00:08:52.468 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3397350 00:08:52.468 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3397350 00:08:52.727 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:52.727 20:55:43 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 
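Note: the referral checks traced above follow an add/verify/remove pattern: referrals are added over RPC, then the target's view (nvmf_discovery_get_referrals) is compared against the host's view (nvme discover) until both drain to empty. A minimal sketch of that flow, assuming rpc.py stands in for the rpc_cmd wrapper and with the hostnqn/hostid arguments dropped for brevity (the real logic lives in test/nvmf/target/referrals.sh):

    # add one referral to the discovery service and one to a subsystem
    rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery
    rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    # target-side view: traddr of every configured referral
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # host-side view: every discovery record except the local discovery
    # subsystem itself -- the same jq filter get_referral_ips uses above
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort

    # the test passes when the two views match, then removes each referral
    rpc.py nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1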
00:08:52.727 00:08:52.727 real 0m9.045s 00:08:52.727 user 0m9.828s 00:08:52.727 sys 0m6.045s 00:08:52.727 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:52.727 20:55:43 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:52.727 ************************************ 00:08:52.727 END TEST nvmf_referrals 00:08:52.727 ************************************ 00:08:52.727 20:55:43 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:52.727 20:55:43 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:52.727 20:55:43 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:52.727 20:55:43 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:52.727 ************************************ 00:08:52.727 START TEST nvmf_connect_disconnect 00:08:52.727 ************************************ 00:08:52.728 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:52.988 * Looking for test storage... 00:08:52.988 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.988 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.989 20:55:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:59.602 20:55:49 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.602 20:55:49 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:59.602 20:55:49 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:59.602 20:55:49 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:59.602 20:55:49 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:59.602 20:55:49 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:59.602 20:55:49 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:59.602 20:55:49 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:59.602 20:55:49 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- 
# e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:59.602 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:59.602 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:59.602 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:59.603 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:59.603 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:59.603 20:55:50 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:59.603 6: mlx_0_0: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:59.603 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:59.603 altname enp217s0f0np0 00:08:59.603 altname ens818f0np0 00:08:59.603 inet 192.168.100.8/24 scope global mlx_0_0 00:08:59.603 valid_lft forever preferred_lft forever 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:59.603 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:59.603 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:59.603 altname enp217s0f1np1 00:08:59.603 altname ens818f1np1 00:08:59.603 inet 192.168.100.9/24 scope global mlx_0_1 00:08:59.603 valid_lft forever preferred_lft forever 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.603 20:55:50 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:59.603 192.168.100.9' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:59.603 192.168.100.9' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:59.603 192.168.100.9' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:59.603 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3401135 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3401135 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3401135 ']' 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:59.604 20:55:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:59.604 [2024-07-13 20:55:50.313184] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:59.604 [2024-07-13 20:55:50.313235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.604 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.604 [2024-07-13 20:55:50.386367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.604 [2024-07-13 20:55:50.425442] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.604 [2024-07-13 20:55:50.425488] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.604 [2024-07-13 20:55:50.425497] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.604 [2024-07-13 20:55:50.425505] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.604 [2024-07-13 20:55:50.425512] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
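Note: nvmfappstart, traced above, reduces to launching the target with the test's event mask and core mask and blocking until its RPC socket answers. A rough sketch, assuming rpc_get_methods as the liveness probe (the real waitforlisten in autotest_common.sh adds retries, timeouts, and pid checks that are elided here):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # crude stand-in for waitforlisten: poll until the RPC server replies
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done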
00:08:59.604 [2024-07-13 20:55:50.425569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.604 [2024-07-13 20:55:50.425666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.604 [2024-07-13 20:55:50.425730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.604 [2024-07-13 20:55:50.425732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.548 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:00.548 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.549 [2024-07-13 20:55:51.177041] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:00.549 [2024-07-13 20:55:51.199413] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a41c80/0x1a46170) succeed. 00:09:00.549 [2024-07-13 20:55:51.209701] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a432c0/0x1a87800) succeed. 
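Note: the RPC sequence traced here and just below provisions a complete, minimal RDMA target. Condensed into one block (again with rpc.py standing in for rpc_cmd):

    # RDMA transport: 1024 shared buffers, 8 KiB I/O units; in-capsule
    # data is requested as 0 but bumped to the 256-byte minimum required
    # for msdbd=16, per the WARNING above
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0

    # 64 MiB malloc bdev with 512-byte blocks; rpc.py prints the bdev name
    bdev=$(rpc.py bdev_malloc_create 64 512)

    # subsystem allowing any host (-a), backed by the bdev, listening on
    # the first RDMA-capable interface found earlier
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420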
00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:00.549 [2024-07-13 20:55:51.349615] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:00.549 20:55:51 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:03.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:38.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.058 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:12:11.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' 
rdma == rdma ']' 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:14.018 rmmod nvme_rdma 00:14:14.018 rmmod nvme_fabrics 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3401135 ']' 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3401135 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3401135 ']' 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3401135 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3401135 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3401135' 00:14:14.018 killing process with pid 3401135 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3401135 00:14:14.018 21:01:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3401135 00:14:14.278 21:01:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:14.278 21:01:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:14.278 00:14:14.278 real 5m21.468s 00:14:14.278 user 20m54.888s 00:14:14.278 sys 0m17.061s 00:14:14.278 21:01:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:14.278 21:01:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:14.278 ************************************ 00:14:14.278 END TEST nvmf_connect_disconnect 00:14:14.278 ************************************ 00:14:14.278 21:01:05 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:14.278 21:01:05 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:14.278 21:01:05 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:14.278 21:01:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:14.278 ************************************ 00:14:14.278 START TEST nvmf_multitarget 00:14:14.278 ************************************ 00:14:14.278 21:01:05 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:14.538 * Looking for test storage... 
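Note: the long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is the visible half of connect_disconnect's loop: num_iterations=100 connect/disconnect cycles against the subsystem created earlier. A condensed sketch, assuming waitforserial from common.sh as the readiness check (device sanity checks and the exact helper sequence in target/connect_disconnect.sh are simplified; -i 8 caps the number of I/O queues per connect):

    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        waitforserial SPDKISFASTANDAWESOME    # wait for the namespace to appear
        # disconnect emits the "disconnected 1 controller(s)" lines seen above
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done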
00:14:14.538 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.538 21:01:05 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:14.539 21:01:05 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.112 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:21.113 21:01:11 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:21.113 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:21.113 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:21.113 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:21.113 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:21.113 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:21.113 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:21.113 altname enp217s0f0np0 00:14:21.113 altname ens818f0np0 00:14:21.113 inet 192.168.100.8/24 scope global mlx_0_0 00:14:21.113 valid_lft forever preferred_lft forever 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:21.113 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:21.113 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:21.113 altname enp217s0f1np1 00:14:21.113 altname ens818f1np1 00:14:21.113 inet 192.168.100.9/24 scope global mlx_0_1 00:14:21.113 valid_lft forever preferred_lft forever 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:21.113 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:21.114 192.168.100.9' 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:21.114 192.168.100.9' 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:21.114 192.168.100.9' 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@458 -- # tail -n +2 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3460373 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3460373 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3460373 ']' 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:21.114 [2024-07-13 21:01:11.772842] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:21.114 [2024-07-13 21:01:11.772893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.114 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.114 [2024-07-13 21:01:11.843090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.114 [2024-07-13 21:01:11.882894] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.114 [2024-07-13 21:01:11.882939] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.114 [2024-07-13 21:01:11.882948] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.114 [2024-07-13 21:01:11.882956] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.114 [2024-07-13 21:01:11.882964] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
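nvmfappstart, as traced here, backgrounds the target binary with the shared-memory id, tracepoint mask, and core mask passed by the test (-i 0 -e 0xFFFF -m 0xF), records $nvmfpid, and waitforlisten then blocks (up to max_retries=100) until the app answers on /var/tmp/spdk.sock. A reduced sketch, with a socket poll standing in for waitforlisten's real RPC probe (an assumption, not the harness code):

# sketch: start the target and wait for its RPC socket
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # stand-in for waitforlisten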
00:14:21.114 [2024-07-13 21:01:11.883007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.114 [2024-07-13 21:01:11.883106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.114 [2024-07-13 21:01:11.883121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.114 [2024-07-13 21:01:11.883123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:21.114 21:01:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:21.373 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.373 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:21.373 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:21.373 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:21.373 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:21.373 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:21.373 "nvmf_tgt_1" 00:14:21.373 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:21.631 "nvmf_tgt_2" 00:14:21.632 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:21.632 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:21.632 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:21.632 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:21.890 true 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:21.890 true 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.890 
21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.890 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:21.890 rmmod nvme_rdma 00:14:21.890 rmmod nvme_fabrics 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3460373 ']' 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3460373 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3460373 ']' 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3460373 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3460373 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3460373' 00:14:22.149 killing process with pid 3460373 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3460373 00:14:22.149 21:01:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3460373 00:14:22.149 21:01:13 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.149 21:01:13 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:22.149 00:14:22.149 real 0m7.906s 00:14:22.149 user 0m6.997s 00:14:22.149 sys 0m5.380s 00:14:22.149 21:01:13 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:22.149 21:01:13 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:22.149 ************************************ 00:14:22.149 END TEST nvmf_multitarget 00:14:22.149 ************************************ 00:14:22.408 21:01:13 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:22.408 21:01:13 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:22.408 21:01:13 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:22.408 21:01:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:22.408 ************************************ 00:14:22.408 START TEST nvmf_rpc 00:14:22.408 ************************************ 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:22.408 * Looking for test 
storage... 00:14:22.408 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.408 21:01:13 
nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.408 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.409 21:01:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.409 21:01:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.409 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:22.409 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:22.409 21:01:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.409 21:01:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:28.974 21:01:19 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:28.974 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:28.975 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
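Device discovery here keys purely off PCI vendor/device IDs: the e810, x722, and mlx arrays are filled from pci_bus_cache, each matched function is checked against the variants needing special handling (0x1017/0x1019), and every Mellanox hit on an rdma run switches NVME_CONNECT to 'nvme connect -i 15'. The two ConnectX-4 Lx (0x15b3:0x1015) functions found above can be cross-checked by hand; the lspci call is an illustration, not part of the harness:

# manual equivalent of the match above (not harness code)
lspci -nn -d 15b3:1015                     # Mellanox ConnectX-4 Lx, as found at 0000:d9:00.0/.1
ls /sys/bus/pci/devices/0000:d9:00.0/net/  # -> mlx_0_0, the netdev echoed by common.sh@400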
00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:28.975 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:28.975 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:28.975 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:14:28.975 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # 
modprobe ib_cm 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:29.235 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:29.235 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:29.235 altname enp217s0f0np0 00:14:29.235 altname ens818f0np0 00:14:29.235 inet 192.168.100.8/24 scope global mlx_0_0 00:14:29.235 valid_lft forever preferred_lft 
forever 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:29.235 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:29.235 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:29.235 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:29.235 altname enp217s0f1np1 00:14:29.235 altname ens818f1np1 00:14:29.236 inet 192.168.100.9/24 scope global mlx_0_1 00:14:29.236 valid_lft forever preferred_lft forever 00:14:29.236 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:29.236 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.236 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:29.236 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:29.236 21:01:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:29.236 192.168.100.9' 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:29.236 192.168.100.9' 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:29.236 192.168.100.9' 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3464062 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3464062 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3464062 ']' 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
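Before this second target app comes up, get_available_rdma_ips has rebuilt the same two-address list as in the multitarget run: one IPv4 address per RDMA-capable netdev, with the first entry becoming NVMF_FIRST_TARGET_IP and the next NVMF_SECOND_TARGET_IP. A condensed sketch built only from the commands traced above (the get_ip helper is a stand-in for common.sh's get_ip_address):

# sketch: derive the target IPs the way common.sh@456-458 does
get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
RDMA_IP_LIST="$(get_ip mlx_0_0)
$(get_ip mlx_0_1)"                                         # 192.168.100.8 / 192.168.100.9
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)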
00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:29.236 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.495 [2024-07-13 21:01:20.158501] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:29.495 [2024-07-13 21:01:20.158549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.496 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.496 [2024-07-13 21:01:20.231516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.496 [2024-07-13 21:01:20.272396] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.496 [2024-07-13 21:01:20.272439] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.496 [2024-07-13 21:01:20.272449] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.496 [2024-07-13 21:01:20.272457] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.496 [2024-07-13 21:01:20.272481] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.496 [2024-07-13 21:01:20.272533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.496 [2024-07-13 21:01:20.272647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.496 [2024-07-13 21:01:20.272733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.496 [2024-07-13 21:01:20.272734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.432 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:30.432 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:14:30.432 21:01:20 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.432 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.432 21:01:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.432 21:01:21 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.432 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:30.432 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.432 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.432 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.432 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:30.432 "tick_rate": 2500000000, 00:14:30.432 "poll_groups": [ 00:14:30.432 { 00:14:30.432 "name": "nvmf_tgt_poll_group_000", 00:14:30.432 "admin_qpairs": 0, 00:14:30.432 "io_qpairs": 0, 00:14:30.432 "current_admin_qpairs": 0, 00:14:30.432 "current_io_qpairs": 0, 00:14:30.432 "pending_bdev_io": 0, 00:14:30.432 "completed_nvme_io": 0, 00:14:30.432 "transports": [] 00:14:30.432 }, 00:14:30.432 { 00:14:30.432 "name": "nvmf_tgt_poll_group_001", 00:14:30.432 "admin_qpairs": 0, 00:14:30.432 "io_qpairs": 0, 00:14:30.432 "current_admin_qpairs": 0, 00:14:30.432 "current_io_qpairs": 0, 00:14:30.432 "pending_bdev_io": 0, 00:14:30.432 "completed_nvme_io": 0, 00:14:30.432 "transports": [] 
00:14:30.432 }, 00:14:30.432 { 00:14:30.432 "name": "nvmf_tgt_poll_group_002", 00:14:30.432 "admin_qpairs": 0, 00:14:30.432 "io_qpairs": 0, 00:14:30.432 "current_admin_qpairs": 0, 00:14:30.432 "current_io_qpairs": 0, 00:14:30.432 "pending_bdev_io": 0, 00:14:30.432 "completed_nvme_io": 0, 00:14:30.432 "transports": [] 00:14:30.432 }, 00:14:30.432 { 00:14:30.432 "name": "nvmf_tgt_poll_group_003", 00:14:30.432 "admin_qpairs": 0, 00:14:30.432 "io_qpairs": 0, 00:14:30.432 "current_admin_qpairs": 0, 00:14:30.433 "current_io_qpairs": 0, 00:14:30.433 "pending_bdev_io": 0, 00:14:30.433 "completed_nvme_io": 0, 00:14:30.433 "transports": [] 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 }' 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.433 [2024-07-13 21:01:21.143338] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2239cb0/0x223e1a0) succeed. 00:14:30.433 [2024-07-13 21:01:21.153608] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x223b2f0/0x227f830) succeed. 
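The transport bring-up just traced can be reproduced by hand against a running nvmf_tgt with SPDK's scripts/rpc.py; rpc_cmd in this suite forwards its arguments to that script unchanged. A sketch, assuming the default /var/tmp/spdk.sock RPC socket:

# Before the transport exists every poll group reports "transports": [],
# which is what the jq probe above checks for (null at index 0).
./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0]'

# Create the RDMA transport with the same options as the run above.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Each poll group should now carry one RDMA transport with a device
# entry per IB port (mlx5_0 and mlx5_1 on this machine).
./scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[0].transports[0].trtype'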
00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.433 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:30.433 "tick_rate": 2500000000, 00:14:30.433 "poll_groups": [ 00:14:30.433 { 00:14:30.433 "name": "nvmf_tgt_poll_group_000", 00:14:30.433 "admin_qpairs": 0, 00:14:30.433 "io_qpairs": 0, 00:14:30.433 "current_admin_qpairs": 0, 00:14:30.433 "current_io_qpairs": 0, 00:14:30.433 "pending_bdev_io": 0, 00:14:30.433 "completed_nvme_io": 0, 00:14:30.433 "transports": [ 00:14:30.433 { 00:14:30.433 "trtype": "RDMA", 00:14:30.433 "pending_data_buffer": 0, 00:14:30.433 "devices": [ 00:14:30.433 { 00:14:30.433 "name": "mlx5_0", 00:14:30.433 "polls": 15379, 00:14:30.433 "idle_polls": 15379, 00:14:30.433 "completions": 0, 00:14:30.433 "requests": 0, 00:14:30.433 "request_latency": 0, 00:14:30.433 "pending_free_request": 0, 00:14:30.433 "pending_rdma_read": 0, 00:14:30.433 "pending_rdma_write": 0, 00:14:30.433 "pending_rdma_send": 0, 00:14:30.433 "total_send_wrs": 0, 00:14:30.433 "send_doorbell_updates": 0, 00:14:30.433 "total_recv_wrs": 4096, 00:14:30.433 "recv_doorbell_updates": 1 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "name": "mlx5_1", 00:14:30.433 "polls": 15379, 00:14:30.433 "idle_polls": 15379, 00:14:30.433 "completions": 0, 00:14:30.433 "requests": 0, 00:14:30.433 "request_latency": 0, 00:14:30.433 "pending_free_request": 0, 00:14:30.433 "pending_rdma_read": 0, 00:14:30.433 "pending_rdma_write": 0, 00:14:30.433 "pending_rdma_send": 0, 00:14:30.433 "total_send_wrs": 0, 00:14:30.433 "send_doorbell_updates": 0, 00:14:30.433 "total_recv_wrs": 4096, 00:14:30.433 "recv_doorbell_updates": 1 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "name": "nvmf_tgt_poll_group_001", 00:14:30.433 "admin_qpairs": 0, 00:14:30.433 "io_qpairs": 0, 00:14:30.433 "current_admin_qpairs": 0, 00:14:30.433 "current_io_qpairs": 0, 00:14:30.433 "pending_bdev_io": 0, 00:14:30.433 "completed_nvme_io": 0, 00:14:30.433 "transports": [ 00:14:30.433 { 00:14:30.433 "trtype": "RDMA", 00:14:30.433 "pending_data_buffer": 0, 00:14:30.433 "devices": [ 00:14:30.433 { 00:14:30.433 "name": "mlx5_0", 00:14:30.433 "polls": 9562, 00:14:30.433 "idle_polls": 9562, 00:14:30.433 "completions": 0, 00:14:30.433 "requests": 0, 00:14:30.433 "request_latency": 0, 00:14:30.433 "pending_free_request": 0, 00:14:30.433 "pending_rdma_read": 0, 00:14:30.433 "pending_rdma_write": 0, 00:14:30.433 "pending_rdma_send": 0, 00:14:30.433 "total_send_wrs": 0, 00:14:30.433 "send_doorbell_updates": 0, 00:14:30.433 "total_recv_wrs": 4096, 00:14:30.433 "recv_doorbell_updates": 1 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "name": "mlx5_1", 00:14:30.433 "polls": 9562, 00:14:30.433 "idle_polls": 9562, 00:14:30.433 "completions": 0, 00:14:30.433 "requests": 0, 00:14:30.433 "request_latency": 0, 00:14:30.433 "pending_free_request": 0, 00:14:30.433 "pending_rdma_read": 0, 00:14:30.433 "pending_rdma_write": 0, 00:14:30.433 "pending_rdma_send": 0, 00:14:30.433 "total_send_wrs": 0, 00:14:30.433 "send_doorbell_updates": 0, 00:14:30.433 "total_recv_wrs": 4096, 00:14:30.433 "recv_doorbell_updates": 1 
00:14:30.433 } 00:14:30.433 ] 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "name": "nvmf_tgt_poll_group_002", 00:14:30.433 "admin_qpairs": 0, 00:14:30.433 "io_qpairs": 0, 00:14:30.433 "current_admin_qpairs": 0, 00:14:30.433 "current_io_qpairs": 0, 00:14:30.433 "pending_bdev_io": 0, 00:14:30.433 "completed_nvme_io": 0, 00:14:30.433 "transports": [ 00:14:30.433 { 00:14:30.433 "trtype": "RDMA", 00:14:30.433 "pending_data_buffer": 0, 00:14:30.433 "devices": [ 00:14:30.433 { 00:14:30.433 "name": "mlx5_0", 00:14:30.433 "polls": 5335, 00:14:30.433 "idle_polls": 5335, 00:14:30.433 "completions": 0, 00:14:30.433 "requests": 0, 00:14:30.433 "request_latency": 0, 00:14:30.433 "pending_free_request": 0, 00:14:30.433 "pending_rdma_read": 0, 00:14:30.433 "pending_rdma_write": 0, 00:14:30.433 "pending_rdma_send": 0, 00:14:30.433 "total_send_wrs": 0, 00:14:30.433 "send_doorbell_updates": 0, 00:14:30.433 "total_recv_wrs": 4096, 00:14:30.433 "recv_doorbell_updates": 1 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "name": "mlx5_1", 00:14:30.433 "polls": 5335, 00:14:30.433 "idle_polls": 5335, 00:14:30.433 "completions": 0, 00:14:30.433 "requests": 0, 00:14:30.433 "request_latency": 0, 00:14:30.433 "pending_free_request": 0, 00:14:30.433 "pending_rdma_read": 0, 00:14:30.433 "pending_rdma_write": 0, 00:14:30.433 "pending_rdma_send": 0, 00:14:30.433 "total_send_wrs": 0, 00:14:30.433 "send_doorbell_updates": 0, 00:14:30.433 "total_recv_wrs": 4096, 00:14:30.433 "recv_doorbell_updates": 1 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "name": "nvmf_tgt_poll_group_003", 00:14:30.433 "admin_qpairs": 0, 00:14:30.433 "io_qpairs": 0, 00:14:30.433 "current_admin_qpairs": 0, 00:14:30.433 "current_io_qpairs": 0, 00:14:30.433 "pending_bdev_io": 0, 00:14:30.433 "completed_nvme_io": 0, 00:14:30.433 "transports": [ 00:14:30.433 { 00:14:30.433 "trtype": "RDMA", 00:14:30.433 "pending_data_buffer": 0, 00:14:30.433 "devices": [ 00:14:30.433 { 00:14:30.433 "name": "mlx5_0", 00:14:30.433 "polls": 876, 00:14:30.433 "idle_polls": 876, 00:14:30.433 "completions": 0, 00:14:30.433 "requests": 0, 00:14:30.433 "request_latency": 0, 00:14:30.433 "pending_free_request": 0, 00:14:30.433 "pending_rdma_read": 0, 00:14:30.433 "pending_rdma_write": 0, 00:14:30.433 "pending_rdma_send": 0, 00:14:30.433 "total_send_wrs": 0, 00:14:30.433 "send_doorbell_updates": 0, 00:14:30.433 "total_recv_wrs": 4096, 00:14:30.433 "recv_doorbell_updates": 1 00:14:30.433 }, 00:14:30.433 { 00:14:30.433 "name": "mlx5_1", 00:14:30.433 "polls": 876, 00:14:30.433 "idle_polls": 876, 00:14:30.433 "completions": 0, 00:14:30.433 "requests": 0, 00:14:30.433 "request_latency": 0, 00:14:30.433 "pending_free_request": 0, 00:14:30.433 "pending_rdma_read": 0, 00:14:30.433 "pending_rdma_write": 0, 00:14:30.433 "pending_rdma_send": 0, 00:14:30.433 "total_send_wrs": 0, 00:14:30.433 "send_doorbell_updates": 0, 00:14:30.433 "total_recv_wrs": 4096, 00:14:30.433 "recv_doorbell_updates": 1 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 } 00:14:30.433 ] 00:14:30.433 }' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:30.691 21:01:21 
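The jcount/jsum assertions being traced here are thin wrappers over jq; reconstructed roughly from the rpc.sh@14-20 trace lines (the $stats variable holding the last nvmf_get_stats dump is an assumption carried over from the script):

# Count how many values a jq filter yields from the stats JSON.
jcount() {
  local filter=$1
  jq "$filter" <<< "$stats" | wc -l
}

# Sum the numeric values a filter yields, e.g. io_qpairs across all
# poll groups.
jsum() {
  local filter=$1
  jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# The checks traced above are then simply:
#   (( $(jcount '.poll_groups[].name') == 4 ))
#   (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))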
nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.691 Malloc1 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.691 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.949 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.949 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.949 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:30.949 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.949 21:01:21 
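Condensed from the trace, provisioning the test subsystem takes four RPCs (the RDMA listener is added immediately after); a sketch using the exact arguments the suite passes:

# 64 MiB RAM-backed bdev with 512-byte blocks.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1

# Subsystem with a fixed serial; -a permits any host for the moment.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME

# Attach the bdev as a namespace, then disable allow-any-host (-d) so
# the negative connect test that follows can exercise the host ACL.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1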
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.949 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.949 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:30.949 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.949 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.950 [2024-07-13 21:01:21.608206] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:30.950 [2024-07-13 21:01:21.660140] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:30.950 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:30.950 could not add new controller: failed to write to nvme-fabrics device 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.950 21:01:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:31.885 21:01:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:31.885 21:01:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:31.885 21:01:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:31.885 21:01:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:31.885 21:01:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:33.820 21:01:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:33.820 21:01:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:33.820 21:01:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.820 21:01:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:33.820 21:01:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.820 21:01:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:33.820 21:01:24 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:35.200 [2024-07-13 21:01:25.752033] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:35.200 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:35.200 could not add new controller: failed to write to nvme-fabrics device 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.200 21:01:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:36.138 21:01:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:36.138 21:01:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:36.138 21:01:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.138 21:01:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:36.138 21:01:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:38.044 21:01:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:38.044 21:01:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:38.044 21:01:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:38.044 21:01:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:38.044 21:01:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.044 21:01:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:38.044 21:01:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.982 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.983 [2024-07-13 21:01:29.810030] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- 
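Stepping back, rpc.sh@55-94 above is a host-ACL round trip: the connect is refused while the host is not whitelisted, succeeds after nvmf_subsystem_add_host, is refused again after remove_host, and succeeds once allow_any_host is re-enabled. A condensed sketch of that sequence (the -i/--hostid options from the trace are dropped for readability):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
SUBNQN=nqn.2016-06.io.spdk:cnode1

# Rejected: allow_any_host is disabled and no host is whitelisted
# ("does not allow host" in the target log above).
nvme connect -t rdma -n $SUBNQN -a 192.168.100.8 -s 4420 --hostnqn=$HOSTNQN \
    && echo "unexpected: connect should have failed"

# Whitelist the host NQN; the same connect now succeeds.
./scripts/rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN
nvme connect -t rdma -n $SUBNQN -a 192.168.100.8 -s 4420 --hostnqn=$HOSTNQN
nvme disconnect -n $SUBNQN

# remove_host closes the door again; allow_any_host -e reopens it for
# every initiator.
./scripts/rpc.py nvmf_subsystem_remove_host $SUBNQN $HOSTNQN
./scripts/rpc.py nvmf_subsystem_allow_any_host -e $SUBNQN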
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.983 21:01:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:39.921 21:01:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:39.921 21:01:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:39.921 21:01:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.921 21:01:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:39.921 21:01:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:42.456 21:01:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:42.456 21:01:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:42.456 21:01:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.456 21:01:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:42.456 21:01:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.456 21:01:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:42.456 21:01:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
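The waitforserial helper that brackets every connect in this loop is a retry poll on lsblk; reconstructed roughly from the autotest_common.sh@1194-1204 trace entries:

# Wait (up to ~32 s) for a block device carrying the expected serial to
# appear after nvme connect; the namespace surfaces asynchronously.
waitforserial() {
  local serial=$1 i=0
  local nvme_device_counter=1 nvme_devices=0
  while (( i++ <= 15 )); do
    sleep 2
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices == nvme_device_counter )) && return 0
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME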
00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.023 [2024-07-13 21:01:33.824229] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.023 21:01:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:43.958 21:01:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:43.958 21:01:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:43.958 21:01:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.958 21:01:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:43.958 21:01:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:46.490 21:01:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:46.490 21:01:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:46.490 21:01:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.490 21:01:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:46.490 21:01:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.490 21:01:36 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:46.490 21:01:36 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o 
NAME,SERIAL 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.057 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.058 [2024-07-13 21:01:37.870652] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.058 21:01:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:47.994 21:01:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:47.994 21:01:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:47.994 21:01:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.994 21:01:38 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:47.994 21:01:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:50.528 21:01:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:50.528 21:01:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:50.528 21:01:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.528 21:01:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:50.528 21:01:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.528 21:01:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:50.528 21:01:40 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 [2024-07-13 21:01:41.924254] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.094 
21:01:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.094 21:01:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:52.030 21:01:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:52.030 21:01:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:52.030 21:01:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.030 21:01:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:52.030 21:01:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:54.633 21:01:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:54.633 21:01:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:54.633 21:01:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.633 21:01:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:54.633 21:01:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.633 21:01:44 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:54.633 21:01:44 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.200 [2024-07-13 21:01:45.953465] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.200 21:01:45 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:56.136 21:01:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:56.136 21:01:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:14:56.136 21:01:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.136 21:01:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:56.136 21:01:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:14:58.669 21:01:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:58.669 21:01:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:58.669 21:01:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.669 21:01:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:58.669 21:01:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.669 21:01:48 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:14:58.669 21:01:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 [2024-07-13 21:01:49.981687] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:59.239 21:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 [2024-07-13 21:01:50.030423] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.239 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.240 [2024-07-13 21:01:50.082038] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.240 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.499 [2024-07-13 21:01:50.130210] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.499 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.499 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.499 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.499 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.499 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.499 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.500 [2024-07-13 21:01:50.178326] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.500 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:59.500 "tick_rate": 2500000000, 00:14:59.500 "poll_groups": [ 00:14:59.500 { 00:14:59.500 "name": "nvmf_tgt_poll_group_000", 00:14:59.500 "admin_qpairs": 2, 00:14:59.500 "io_qpairs": 27, 00:14:59.500 "current_admin_qpairs": 0, 00:14:59.500 "current_io_qpairs": 0, 00:14:59.500 "pending_bdev_io": 0, 00:14:59.500 "completed_nvme_io": 127, 00:14:59.500 "transports": [ 00:14:59.500 { 00:14:59.500 "trtype": "RDMA", 00:14:59.500 "pending_data_buffer": 0, 00:14:59.500 "devices": [ 00:14:59.500 { 00:14:59.500 "name": "mlx5_0", 00:14:59.500 "polls": 3552243, 00:14:59.500 "idle_polls": 3551915, 00:14:59.500 "completions": 367, 00:14:59.500 "requests": 183, 00:14:59.500 "request_latency": 36754706, 00:14:59.500 "pending_free_request": 0, 00:14:59.500 "pending_rdma_read": 0, 00:14:59.500 "pending_rdma_write": 0, 00:14:59.500 "pending_rdma_send": 0, 00:14:59.500 "total_send_wrs": 310, 00:14:59.500 "send_doorbell_updates": 162, 00:14:59.500 "total_recv_wrs": 4279, 00:14:59.500 "recv_doorbell_updates": 162 00:14:59.500 }, 00:14:59.500 { 00:14:59.500 "name": "mlx5_1", 00:14:59.500 "polls": 3552243, 00:14:59.500 "idle_polls": 3552243, 00:14:59.500 "completions": 0, 00:14:59.500 "requests": 0, 00:14:59.500 "request_latency": 0, 00:14:59.500 "pending_free_request": 0, 00:14:59.500 "pending_rdma_read": 0, 00:14:59.500 "pending_rdma_write": 0, 00:14:59.500 "pending_rdma_send": 0, 00:14:59.500 "total_send_wrs": 0, 00:14:59.500 "send_doorbell_updates": 0, 00:14:59.500 "total_recv_wrs": 4096, 00:14:59.500 "recv_doorbell_updates": 1 00:14:59.500 } 00:14:59.500 ] 00:14:59.500 } 00:14:59.500 ] 00:14:59.500 }, 00:14:59.500 { 00:14:59.500 "name": "nvmf_tgt_poll_group_001", 00:14:59.500 "admin_qpairs": 2, 00:14:59.500 "io_qpairs": 26, 00:14:59.500 "current_admin_qpairs": 0, 00:14:59.500 "current_io_qpairs": 0, 00:14:59.500 "pending_bdev_io": 0, 00:14:59.500 "completed_nvme_io": 79, 00:14:59.500 "transports": [ 00:14:59.500 { 00:14:59.500 "trtype": "RDMA", 00:14:59.500 
"pending_data_buffer": 0, 00:14:59.500 "devices": [ 00:14:59.500 { 00:14:59.500 "name": "mlx5_0", 00:14:59.500 "polls": 3543430, 00:14:59.500 "idle_polls": 3543189, 00:14:59.500 "completions": 262, 00:14:59.500 "requests": 131, 00:14:59.500 "request_latency": 22301408, 00:14:59.500 "pending_free_request": 0, 00:14:59.500 "pending_rdma_read": 0, 00:14:59.500 "pending_rdma_write": 0, 00:14:59.500 "pending_rdma_send": 0, 00:14:59.500 "total_send_wrs": 208, 00:14:59.500 "send_doorbell_updates": 120, 00:14:59.500 "total_recv_wrs": 4227, 00:14:59.500 "recv_doorbell_updates": 121 00:14:59.500 }, 00:14:59.500 { 00:14:59.500 "name": "mlx5_1", 00:14:59.500 "polls": 3543430, 00:14:59.500 "idle_polls": 3543430, 00:14:59.500 "completions": 0, 00:14:59.500 "requests": 0, 00:14:59.500 "request_latency": 0, 00:14:59.500 "pending_free_request": 0, 00:14:59.500 "pending_rdma_read": 0, 00:14:59.500 "pending_rdma_write": 0, 00:14:59.500 "pending_rdma_send": 0, 00:14:59.500 "total_send_wrs": 0, 00:14:59.500 "send_doorbell_updates": 0, 00:14:59.500 "total_recv_wrs": 4096, 00:14:59.500 "recv_doorbell_updates": 1 00:14:59.500 } 00:14:59.500 ] 00:14:59.500 } 00:14:59.500 ] 00:14:59.500 }, 00:14:59.500 { 00:14:59.500 "name": "nvmf_tgt_poll_group_002", 00:14:59.500 "admin_qpairs": 1, 00:14:59.500 "io_qpairs": 26, 00:14:59.500 "current_admin_qpairs": 0, 00:14:59.500 "current_io_qpairs": 0, 00:14:59.500 "pending_bdev_io": 0, 00:14:59.500 "completed_nvme_io": 127, 00:14:59.500 "transports": [ 00:14:59.500 { 00:14:59.500 "trtype": "RDMA", 00:14:59.500 "pending_data_buffer": 0, 00:14:59.500 "devices": [ 00:14:59.500 { 00:14:59.500 "name": "mlx5_0", 00:14:59.500 "polls": 3523909, 00:14:59.500 "idle_polls": 3523636, 00:14:59.500 "completions": 313, 00:14:59.500 "requests": 156, 00:14:59.500 "request_latency": 36357456, 00:14:59.500 "pending_free_request": 0, 00:14:59.500 "pending_rdma_read": 0, 00:14:59.500 "pending_rdma_write": 0, 00:14:59.500 "pending_rdma_send": 0, 00:14:59.500 "total_send_wrs": 271, 00:14:59.500 "send_doorbell_updates": 131, 00:14:59.500 "total_recv_wrs": 4252, 00:14:59.500 "recv_doorbell_updates": 131 00:14:59.500 }, 00:14:59.500 { 00:14:59.500 "name": "mlx5_1", 00:14:59.500 "polls": 3523909, 00:14:59.500 "idle_polls": 3523909, 00:14:59.500 "completions": 0, 00:14:59.500 "requests": 0, 00:14:59.500 "request_latency": 0, 00:14:59.500 "pending_free_request": 0, 00:14:59.500 "pending_rdma_read": 0, 00:14:59.500 "pending_rdma_write": 0, 00:14:59.500 "pending_rdma_send": 0, 00:14:59.500 "total_send_wrs": 0, 00:14:59.500 "send_doorbell_updates": 0, 00:14:59.500 "total_recv_wrs": 4096, 00:14:59.500 "recv_doorbell_updates": 1 00:14:59.500 } 00:14:59.500 ] 00:14:59.500 } 00:14:59.500 ] 00:14:59.500 }, 00:14:59.500 { 00:14:59.500 "name": "nvmf_tgt_poll_group_003", 00:14:59.500 "admin_qpairs": 2, 00:14:59.500 "io_qpairs": 26, 00:14:59.500 "current_admin_qpairs": 0, 00:14:59.500 "current_io_qpairs": 0, 00:14:59.500 "pending_bdev_io": 0, 00:14:59.500 "completed_nvme_io": 122, 00:14:59.500 "transports": [ 00:14:59.500 { 00:14:59.500 "trtype": "RDMA", 00:14:59.500 "pending_data_buffer": 0, 00:14:59.500 "devices": [ 00:14:59.500 { 00:14:59.500 "name": "mlx5_0", 00:14:59.500 "polls": 2799414, 00:14:59.500 "idle_polls": 2799108, 00:14:59.500 "completions": 348, 00:14:59.500 "requests": 174, 00:14:59.500 "request_latency": 36683900, 00:14:59.500 "pending_free_request": 0, 00:14:59.500 "pending_rdma_read": 0, 00:14:59.500 "pending_rdma_write": 0, 00:14:59.500 "pending_rdma_send": 0, 00:14:59.500 "total_send_wrs": 294, 
00:14:59.500 "send_doorbell_updates": 151, 00:14:59.500 "total_recv_wrs": 4270, 00:14:59.500 "recv_doorbell_updates": 152 00:14:59.500 }, 00:14:59.500 { 00:14:59.500 "name": "mlx5_1", 00:14:59.500 "polls": 2799414, 00:14:59.500 "idle_polls": 2799414, 00:14:59.500 "completions": 0, 00:14:59.500 "requests": 0, 00:14:59.500 "request_latency": 0, 00:14:59.500 "pending_free_request": 0, 00:14:59.500 "pending_rdma_read": 0, 00:14:59.500 "pending_rdma_write": 0, 00:14:59.500 "pending_rdma_send": 0, 00:14:59.500 "total_send_wrs": 0, 00:14:59.500 "send_doorbell_updates": 0, 00:14:59.500 "total_recv_wrs": 4096, 00:14:59.500 "recv_doorbell_updates": 1 00:14:59.501 } 00:14:59.501 ] 00:14:59.501 } 00:14:59.501 ] 00:14:59.501 } 00:14:59.501 ] 00:14:59.501 }' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:14:59.501 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 132097470 > 0 )) 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:59.760 rmmod nvme_rdma 00:14:59.760 rmmod nvme_fabrics 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3464062 ']' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3464062 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3464062 ']' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3464062 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3464062 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3464062' 00:14:59.760 killing process with pid 3464062 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3464062 00:14:59.760 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3464062 00:15:00.020 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:00.020 21:01:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:00.020 00:15:00.020 real 0m37.713s 00:15:00.020 user 2m3.876s 00:15:00.020 sys 0m6.882s 00:15:00.020 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:00.020 21:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.020 ************************************ 00:15:00.020 END TEST nvmf_rpc 00:15:00.020 ************************************ 00:15:00.020 21:01:50 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:00.020 21:01:50 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:00.020 21:01:50 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:00.020 21:01:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:00.020 ************************************ 00:15:00.020 START TEST nvmf_invalid 00:15:00.020 ************************************ 00:15:00.020 21:01:50 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:00.280 * Looking for test storage... 
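Note on the aggregation step traced above: rpc.sh's jsum helper sums a numeric field across every poll group in the nvmf_get_stats output by piping a jq filter into awk, and the test only asserts that each total is positive (qpairs were created, completions happened, request latency accumulated). A minimal standalone sketch of the same jq + awk pattern, assuming the stats JSON printed above has been saved to a local stats.json (a hypothetical file for illustration; the real helper reads the live RPC output):

#!/usr/bin/env bash
set -euo pipefail

jsum() {
    local filter=$1
    # jq emits one number per matching field; awk accumulates the total.
    jq "$filter" < stats.json | awk '{s+=$1} END {print s}'
}

admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')   # 7 in this run (2+2+1+2)
io_qpairs=$(jsum '.poll_groups[].io_qpairs')         # 105 in this run
completions=$(jsum '.poll_groups[].transports[].devices[].completions')   # 1290
latency=$(jsum '.poll_groups[].transports[].devices[].request_latency')   # 132097470

# Mirrors the (( N > 0 )) checks in the trace: the loop actually drove traffic.
(( admin_qpairs > 0 && io_qpairs > 0 && completions > 0 && latency > 0 )) &&
    echo "nvmf_get_stats totals look sane"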
00:15:00.280 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.280 21:01:50 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.280 21:01:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:00.280 21:01:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:00.280 21:01:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:00.280 21:01:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:00.280 21:01:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:00.280 21:01:51 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:00.280 21:01:51 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:00.281 21:01:51 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.281 21:01:51 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.281 21:01:51 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.281 21:01:51 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.281 21:01:51 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.281 21:01:51 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.281 21:01:51 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.281 21:01:51 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:00.281 21:01:51 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:00.281 21:01:51 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:00.281 21:01:51 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.844 
21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:06.844 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:06.844 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:06.844 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:06.844 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:06.844 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
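The interface walk above pairs each discovered mlx5 PCI device with its netdev; the get_ip_address calls that follow in the trace then read each interface's IPv4 address with a small ip/awk/cut pipeline. A self-contained sketch of that pipeline, assuming an interface such as mlx_0_0 already exists (as it does on this host):

#!/usr/bin/env bash
# Read the IPv4 address bound to a netdev, as nvmf/common.sh does in the trace.
get_ip_address() {
    local interface=$1
    # `ip -o -4 addr show` prints one line per address; field 4 is
    # "addr/prefixlen", and cut drops the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

ip=$(get_ip_address mlx_0_0)   # 192.168.100.8 on this machine
[[ -n $ip ]] && echo "mlx_0_0 -> $ip"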
00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:06.845 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:06.845 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:06.845 altname enp217s0f0np0 00:15:06.845 altname ens818f0np0 00:15:06.845 inet 192.168.100.8/24 scope global mlx_0_0 00:15:06.845 valid_lft forever preferred_lft forever 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:06.845 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:06.845 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:06.845 altname enp217s0f1np1 00:15:06.845 altname ens818f1np1 00:15:06.845 inet 192.168.100.9/24 scope global mlx_0_1 00:15:06.845 valid_lft forever preferred_lft forever 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:06.845 192.168.100.9' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:06.845 192.168.100.9' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:06.845 192.168.100.9' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3472476 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3472476 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3472476 ']' 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:06.845 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:06.845 [2024-07-13 21:01:57.297656] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:06.845 [2024-07-13 21:01:57.297707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.845 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.845 [2024-07-13 21:01:57.367398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.846 [2024-07-13 21:01:57.407251] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.846 [2024-07-13 21:01:57.407295] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.846 [2024-07-13 21:01:57.407304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.846 [2024-07-13 21:01:57.407312] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.846 [2024-07-13 21:01:57.407319] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
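Just before the target starts, the trace splits the newline-separated RDMA_IP_LIST into the first and second target addresses with a head/tail pipeline. A short sketch of that split, using the two addresses discovered above:

#!/usr/bin/env bash
# One IP per line, one line per discovered RDMA port.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'

# First line becomes the primary target address...
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
# ...and the second line (skip one, then take one) the secondary.
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9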
00:15:06.846 [2024-07-13 21:01:57.407366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.846 [2024-07-13 21:01:57.407464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.846 [2024-07-13 21:01:57.407549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.846 [2024-07-13 21:01:57.407551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.846 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:06.846 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:15:06.846 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.846 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.846 21:01:57 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:06.846 21:01:57 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.846 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:06.846 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28497 00:15:06.846 [2024-07-13 21:01:57.726506] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:07.105 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:07.105 { 00:15:07.105 "nqn": "nqn.2016-06.io.spdk:cnode28497", 00:15:07.105 "tgt_name": "foobar", 00:15:07.105 "method": "nvmf_create_subsystem", 00:15:07.105 "req_id": 1 00:15:07.105 } 00:15:07.105 Got JSON-RPC error response 00:15:07.105 response: 00:15:07.105 { 00:15:07.105 "code": -32603, 00:15:07.105 "message": "Unable to find target foobar" 00:15:07.105 }' 00:15:07.105 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:07.105 { 00:15:07.105 "nqn": "nqn.2016-06.io.spdk:cnode28497", 00:15:07.105 "tgt_name": "foobar", 00:15:07.105 "method": "nvmf_create_subsystem", 00:15:07.105 "req_id": 1 00:15:07.105 } 00:15:07.105 Got JSON-RPC error response 00:15:07.105 response: 00:15:07.105 { 00:15:07.105 "code": -32603, 00:15:07.105 "message": "Unable to find target foobar" 00:15:07.105 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:07.105 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:07.105 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7566 00:15:07.105 [2024-07-13 21:01:57.923157] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7566: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:07.105 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:07.105 { 00:15:07.105 "nqn": "nqn.2016-06.io.spdk:cnode7566", 00:15:07.105 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:07.105 "method": "nvmf_create_subsystem", 00:15:07.105 "req_id": 1 00:15:07.105 } 00:15:07.105 Got JSON-RPC error response 00:15:07.105 response: 00:15:07.105 { 00:15:07.105 "code": -32602, 00:15:07.105 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:07.105 }' 00:15:07.105 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # [[ 
request: 00:15:07.105 { 00:15:07.105 "nqn": "nqn.2016-06.io.spdk:cnode7566", 00:15:07.105 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:07.105 "method": "nvmf_create_subsystem", 00:15:07.105 "req_id": 1 00:15:07.105 } 00:15:07.105 Got JSON-RPC error response 00:15:07.105 response: 00:15:07.105 { 00:15:07.105 "code": -32602, 00:15:07.105 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:07.105 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:07.105 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:07.105 21:01:57 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13811 00:15:07.364 [2024-07-13 21:01:58.111779] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13811: invalid model number 'SPDK_Controller' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:07.364 { 00:15:07.364 "nqn": "nqn.2016-06.io.spdk:cnode13811", 00:15:07.364 "model_number": "SPDK_Controller\u001f", 00:15:07.364 "method": "nvmf_create_subsystem", 00:15:07.364 "req_id": 1 00:15:07.364 } 00:15:07.364 Got JSON-RPC error response 00:15:07.364 response: 00:15:07.364 { 00:15:07.364 "code": -32602, 00:15:07.364 "message": "Invalid MN SPDK_Controller\u001f" 00:15:07.364 }' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:07.364 { 00:15:07.364 "nqn": "nqn.2016-06.io.spdk:cnode13811", 00:15:07.364 "model_number": "SPDK_Controller\u001f", 00:15:07.364 "method": "nvmf_create_subsystem", 00:15:07.364 "req_id": 1 00:15:07.364 } 00:15:07.364 Got JSON-RPC error response 00:15:07.364 response: 00:15:07.364 { 00:15:07.364 "code": -32602, 00:15:07.364 "message": "Invalid MN SPDK_Controller\u001f" 00:15:07.364 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
118 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid 
-- target/invalid.sh@25 -- # echo -e '\x32' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.364 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 
00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo '9v* !'\''g=)2zK;-],g5CSs' 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '9v* !'\''g=)2zK;-],g5CSs' nqn.2016-06.io.spdk:cnode5778 00:15:07.624 [2024-07-13 21:01:58.452934] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5778: invalid serial number '9v* !'g=)2zK;-],g5CSs' 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:07.624 { 00:15:07.624 "nqn": "nqn.2016-06.io.spdk:cnode5778", 00:15:07.624 "serial_number": "9v* !'\''g=)2zK;-],g5CSs", 00:15:07.624 "method": "nvmf_create_subsystem", 00:15:07.624 "req_id": 1 00:15:07.624 } 00:15:07.624 Got JSON-RPC error response 00:15:07.624 response: 00:15:07.624 { 00:15:07.624 "code": -32602, 00:15:07.624 "message": "Invalid SN 9v* !'\''g=)2zK;-],g5CSs" 00:15:07.624 }' 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:07.624 { 00:15:07.624 "nqn": "nqn.2016-06.io.spdk:cnode5778", 00:15:07.624 "serial_number": "9v* !'g=)2zK;-],g5CSs", 00:15:07.624 "method": "nvmf_create_subsystem", 00:15:07.624 "req_id": 1 00:15:07.624 } 00:15:07.624 Got JSON-RPC error response 00:15:07.624 response: 00:15:07.624 { 00:15:07.624 "code": -32602, 00:15:07.624 "message": "Invalid SN 9v* !'g=)2zK;-],g5CSs" 00:15:07.624 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' 
'58' '59' '60' '61' ... '126' '127') [chars array condensed: the remaining entries are the consecutive codes 58 through 127] 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:07.624 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
[gen_random_s trace condensed: the same printf %x / echo -e / string+= pattern runs once per position of the 41-character model number echoed below; only the last two iterations (E, j) and the loop exit continue here]
00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ : == \- ]] 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo ':DwQ`LLm=H<:]GV0.9#|e#n k[{ro'\''|}s[fs""VEj' 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ':DwQ`LLm=H<:]GV0.9#|e#n k[{ro'\''|}s[fs""VEj' nqn.2016-06.io.spdk:cnode11520 00:15:08.145 [2024-07-13 21:01:58.954541] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11520: invalid model number ':DwQ`LLm=H<:]GV0.9#|e#n k[{ro'|}s[fs""VEj' 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:08.145 { 00:15:08.145 "nqn": "nqn.2016-06.io.spdk:cnode11520", 00:15:08.145 "model_number": ":DwQ`LLm=H<:]GV0.9#|e#n k[{ro'\''|}s[fs\"\"VEj", 00:15:08.145 "method": "nvmf_create_subsystem", 00:15:08.145 "req_id": 1 00:15:08.145 } 00:15:08.145 Got JSON-RPC error response 00:15:08.145 response: 00:15:08.145 { 00:15:08.145 "code": -32602, 00:15:08.145 "message": "Invalid MN :DwQ`LLm=H<:]GV0.9#|e#n k[{ro'\''|}s[fs\"\"VEj" 00:15:08.145 }' 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:08.145 { 00:15:08.145 "nqn": "nqn.2016-06.io.spdk:cnode11520", 00:15:08.145 "model_number": ":DwQ`LLm=H<:]GV0.9#|e#n k[{ro'|}s[fs\"\"VEj", 00:15:08.145 "method": "nvmf_create_subsystem", 00:15:08.145 "req_id": 1 00:15:08.145 } 00:15:08.145 Got JSON-RPC error response 00:15:08.145 response: 00:15:08.145 { 00:15:08.145 "code": -32602, 00:15:08.145 "message": "Invalid MN :DwQ`LLm=H<:]GV0.9#|e#n k[{ro'|}s[fs\"\"VEj" 00:15:08.145 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:08.145 21:01:58 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:15:08.404 [2024-07-13 21:01:59.166141] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1427620/0x142bb10) succeed. 00:15:08.404 [2024-07-13 21:01:59.176364] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1428c60/0x146d1a0) succeed. 
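[note: both rejections above exercise the same helper -- gen_random_s builds a 21-character serial number or a 41-character model number from ASCII codes 32-127, and the test expects nvmf_create_subsystem to answer with JSON-RPC error -32602. A minimal bash sketch of that generator, reconstructed from the per-character trace; the RANDOM selection step is an assumption, since the trace only shows the printf/echo/append steps:]

    # sketch of gen_random_s as reconstructed from the trace; the chars list and
    # the per-character steps match the log, the RANDOM pick is assumed
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))                      # codes listed in the chars=() line above
        for (( ll = 0; ll < length; ll++ )); do
            local code=${chars[RANDOM % ${#chars[@]}]}
            string+=$(echo -e "\x$(printf %x "$code")")  # printf %x + echo -e, as logged
        done
        echo "$string"
    }

[called as gen_random_s 41 for the model number above; the serial-number case presumably used gen_random_s 21]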
00:15:08.662 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:08.662 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:15:08.662 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:15:08.662 192.168.100.9' 00:15:08.662 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:08.662 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:15:08.662 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:15:08.920 [2024-07-13 21:01:59.668810] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:08.920 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:08.920 { 00:15:08.920 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:08.920 "listen_address": { 00:15:08.920 "trtype": "rdma", 00:15:08.920 "traddr": "192.168.100.8", 00:15:08.920 "trsvcid": "4421" 00:15:08.920 }, 00:15:08.920 "method": "nvmf_subsystem_remove_listener", 00:15:08.920 "req_id": 1 00:15:08.920 } 00:15:08.920 Got JSON-RPC error response 00:15:08.920 response: 00:15:08.920 { 00:15:08.920 "code": -32602, 00:15:08.920 "message": "Invalid parameters" 00:15:08.920 }' 00:15:08.920 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:08.920 { 00:15:08.920 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:08.920 "listen_address": { 00:15:08.920 "trtype": "rdma", 00:15:08.920 "traddr": "192.168.100.8", 00:15:08.920 "trsvcid": "4421" 00:15:08.920 }, 00:15:08.920 "method": "nvmf_subsystem_remove_listener", 00:15:08.920 "req_id": 1 00:15:08.920 } 00:15:08.920 Got JSON-RPC error response 00:15:08.920 response: 00:15:08.920 { 00:15:08.920 "code": -32602, 00:15:08.920 "message": "Invalid parameters" 00:15:08.920 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:08.920 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5526 -i 0 00:15:09.178 [2024-07-13 21:01:59.853461] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5526: invalid cntlid range [0-65519] 00:15:09.178 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:09.178 { 00:15:09.178 "nqn": "nqn.2016-06.io.spdk:cnode5526", 00:15:09.178 "min_cntlid": 0, 00:15:09.178 "method": "nvmf_create_subsystem", 00:15:09.178 "req_id": 1 00:15:09.178 } 00:15:09.178 Got JSON-RPC error response 00:15:09.179 response: 00:15:09.179 { 00:15:09.179 "code": -32602, 00:15:09.179 "message": "Invalid cntlid range [0-65519]" 00:15:09.179 }' 00:15:09.179 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:09.179 { 00:15:09.179 "nqn": "nqn.2016-06.io.spdk:cnode5526", 00:15:09.179 "min_cntlid": 0, 00:15:09.179 "method": "nvmf_create_subsystem", 00:15:09.179 "req_id": 1 00:15:09.179 } 00:15:09.179 Got JSON-RPC error response 00:15:09.179 response: 00:15:09.179 { 00:15:09.179 "code": -32602, 00:15:09.179 "message": "Invalid cntlid range [0-65519]" 00:15:09.179 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:09.179 21:01:59 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18416 -i 65520 00:15:09.179 [2024-07-13 21:02:00.034178] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18416: invalid cntlid range [65520-65519] 00:15:09.179 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:09.179 { 00:15:09.179 "nqn": "nqn.2016-06.io.spdk:cnode18416", 00:15:09.179 "min_cntlid": 65520, 00:15:09.179 "method": "nvmf_create_subsystem", 00:15:09.179 "req_id": 1 00:15:09.179 } 00:15:09.179 Got JSON-RPC error response 00:15:09.179 response: 00:15:09.179 { 00:15:09.179 "code": -32602, 00:15:09.179 "message": "Invalid cntlid range [65520-65519]" 00:15:09.179 }' 00:15:09.179 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:09.179 { 00:15:09.179 "nqn": "nqn.2016-06.io.spdk:cnode18416", 00:15:09.179 "min_cntlid": 65520, 00:15:09.179 "method": "nvmf_create_subsystem", 00:15:09.179 "req_id": 1 00:15:09.179 } 00:15:09.179 Got JSON-RPC error response 00:15:09.179 response: 00:15:09.179 { 00:15:09.179 "code": -32602, 00:15:09.179 "message": "Invalid cntlid range [65520-65519]" 00:15:09.179 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:09.437 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25041 -I 0 00:15:09.437 [2024-07-13 21:02:00.214829] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25041: invalid cntlid range [1-0] 00:15:09.437 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:09.437 { 00:15:09.437 "nqn": "nqn.2016-06.io.spdk:cnode25041", 00:15:09.437 "max_cntlid": 0, 00:15:09.437 "method": "nvmf_create_subsystem", 00:15:09.437 "req_id": 1 00:15:09.437 } 00:15:09.437 Got JSON-RPC error response 00:15:09.437 response: 00:15:09.437 { 00:15:09.437 "code": -32602, 00:15:09.437 "message": "Invalid cntlid range [1-0]" 00:15:09.437 }' 00:15:09.437 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:09.437 { 00:15:09.437 "nqn": "nqn.2016-06.io.spdk:cnode25041", 00:15:09.437 "max_cntlid": 0, 00:15:09.437 "method": "nvmf_create_subsystem", 00:15:09.437 "req_id": 1 00:15:09.437 } 00:15:09.437 Got JSON-RPC error response 00:15:09.437 response: 00:15:09.437 { 00:15:09.437 "code": -32602, 00:15:09.437 "message": "Invalid cntlid range [1-0]" 00:15:09.437 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:09.437 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9837 -I 65520 00:15:09.695 [2024-07-13 21:02:00.399506] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9837: invalid cntlid range [1-65520] 00:15:09.695 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:09.695 { 00:15:09.695 "nqn": "nqn.2016-06.io.spdk:cnode9837", 00:15:09.695 "max_cntlid": 65520, 00:15:09.695 "method": "nvmf_create_subsystem", 00:15:09.695 "req_id": 1 00:15:09.695 } 00:15:09.695 Got JSON-RPC error response 00:15:09.695 response: 00:15:09.695 { 00:15:09.695 "code": -32602, 00:15:09.695 "message": "Invalid cntlid range [1-65520]" 00:15:09.695 }' 00:15:09.695 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:09.695 { 00:15:09.695 "nqn": "nqn.2016-06.io.spdk:cnode9837", 
00:15:09.695 "max_cntlid": 65520, 00:15:09.695 "method": "nvmf_create_subsystem", 00:15:09.695 "req_id": 1 00:15:09.695 } 00:15:09.695 Got JSON-RPC error response 00:15:09.695 response: 00:15:09.695 { 00:15:09.695 "code": -32602, 00:15:09.695 "message": "Invalid cntlid range [1-65520]" 00:15:09.695 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:09.695 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11730 -i 6 -I 5 00:15:09.695 [2024-07-13 21:02:00.576181] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11730: invalid cntlid range [6-5] 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:09.954 { 00:15:09.954 "nqn": "nqn.2016-06.io.spdk:cnode11730", 00:15:09.954 "min_cntlid": 6, 00:15:09.954 "max_cntlid": 5, 00:15:09.954 "method": "nvmf_create_subsystem", 00:15:09.954 "req_id": 1 00:15:09.954 } 00:15:09.954 Got JSON-RPC error response 00:15:09.954 response: 00:15:09.954 { 00:15:09.954 "code": -32602, 00:15:09.954 "message": "Invalid cntlid range [6-5]" 00:15:09.954 }' 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:09.954 { 00:15:09.954 "nqn": "nqn.2016-06.io.spdk:cnode11730", 00:15:09.954 "min_cntlid": 6, 00:15:09.954 "max_cntlid": 5, 00:15:09.954 "method": "nvmf_create_subsystem", 00:15:09.954 "req_id": 1 00:15:09.954 } 00:15:09.954 Got JSON-RPC error response 00:15:09.954 response: 00:15:09.954 { 00:15:09.954 "code": -32602, 00:15:09.954 "message": "Invalid cntlid range [6-5]" 00:15:09.954 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:09.954 { 00:15:09.954 "name": "foobar", 00:15:09.954 "method": "nvmf_delete_target", 00:15:09.954 "req_id": 1 00:15:09.954 } 00:15:09.954 Got JSON-RPC error response 00:15:09.954 response: 00:15:09.954 { 00:15:09.954 "code": -32602, 00:15:09.954 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:09.954 }' 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:09.954 { 00:15:09.954 "name": "foobar", 00:15:09.954 "method": "nvmf_delete_target", 00:15:09.954 "req_id": 1 00:15:09.954 } 00:15:09.954 Got JSON-RPC error response 00:15:09.954 response: 00:15:09.954 { 00:15:09.954 "code": -32602, 00:15:09.954 "message": "The specified target doesn't exist, cannot delete it." 
00:15:09.954 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:09.954 rmmod nvme_rdma 00:15:09.954 rmmod nvme_fabrics 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3472476 ']' 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3472476 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3472476 ']' 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3472476 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3472476 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3472476' 00:15:09.954 killing process with pid 3472476 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3472476 00:15:09.954 21:02:00 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3472476 00:15:10.212 21:02:01 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.212 21:02:01 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:10.212 00:15:10.212 real 0m10.198s 00:15:10.212 user 0m18.358s 00:15:10.212 sys 0m5.887s 00:15:10.212 21:02:01 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:10.212 21:02:01 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:10.212 ************************************ 00:15:10.212 END TEST nvmf_invalid 00:15:10.212 ************************************ 00:15:10.470 21:02:01 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:10.470 21:02:01 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:10.470 21:02:01 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:10.470 21:02:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:10.470 
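[note: every negative case in this suite has the same shape -- call rpc.py with one bad parameter, capture stdout and stderr, and pattern-match the -32602 message. A hedged sketch of reproducing the [6-5] cntlid rejection by hand, assuming the target from this run is still listening on the default RPC socket:]

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11730 -i 6 -I 5 2>&1) || true
    [[ $out == *"Invalid cntlid range [6-5]"* ]] && echo "rejected as expected"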
************************************ 00:15:10.470 START TEST nvmf_abort 00:15:10.470 ************************************ 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:10.470 * Looking for test storage... 00:15:10.470 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # 
PATH=... [paths/export.sh@3 trace condensed: prepends /opt/go/1.21.1/bin ahead of the already-repeated /opt/golangci, /opt/protoc and /opt/go entries and the system PATH]
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=... [condensed: the same PATH with /opt/protoc/21.7/bin prepended]
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo ... [condensed: echoes the exported PATH]
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']'
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval
'_remove_spdk_ns 14> /dev/null' 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.470 21:02:01 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:17.037 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:17.037 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:17.298 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:17.298 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:17.298 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:17.298 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.299 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:17.299 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:17.299 21:02:07 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:17.299 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.299 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:17.299 altname enp217s0f0np0 00:15:17.299 altname ens818f0np0 00:15:17.299 inet 192.168.100.8/24 scope global mlx_0_0 00:15:17.299 valid_lft forever preferred_lft forever 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:17.299 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.299 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:17.299 altname enp217s0f1np1 00:15:17.299 altname ens818f1np1 00:15:17.299 inet 192.168.100.9/24 scope global mlx_0_1 00:15:17.299 valid_lft forever preferred_lft forever 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:17.299 192.168.100.9' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:17.299 192.168.100.9' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:17.299 192.168.100.9' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:17.299 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
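The same common.sh helpers then resolve each RDMA interface to its IPv4 address and split the collected list into first and second target IPs, exactly as the `ip | awk | cut` and `head`/`tail` traces above show. A condensed sketch of that pipeline (the mlx_0_0/mlx_0_1 names are the interfaces this particular run discovered):

    # Sketch of get_ip_address (nvmf/common.sh@112-113) and the head/tail split
    # at common.sh@456-458, condensed from the trace above.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one record per address; field 4 is e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9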
00:15:17.300 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3476602 00:15:17.300 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:17.300 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3476602 00:15:17.300 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3476602 ']' 00:15:17.300 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.300 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:17.300 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.300 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:17.300 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.559 [2024-07-13 21:02:08.211845] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:17.559 [2024-07-13 21:02:08.211897] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.559 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.559 [2024-07-13 21:02:08.281101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:17.559 [2024-07-13 21:02:08.319658] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.559 [2024-07-13 21:02:08.319702] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.559 [2024-07-13 21:02:08.319712] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.559 [2024-07-13 21:02:08.319720] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.559 [2024-07-13 21:02:08.319727] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
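The nvmfappstart sequence traced above launches the target in the background and blocks until its RPC socket answers. A minimal sketch of that pattern; the polling loop is a simplified stand-in for common/autotest_common.sh's waitforlisten, not its actual implementation:

    # Launch nvmf_tgt as traced above (nvmf/common.sh@480-482) and wait for
    # /var/tmp/spdk.sock to start answering RPCs. Simplified stand-in for waitforlisten.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done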
00:15:17.559 [2024-07-13 21:02:08.319832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.559 [2024-07-13 21:02:08.319934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.559 [2024-07-13 21:02:08.319936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.560 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:17.560 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:15:17.560 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.560 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.560 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 [2024-07-13 21:02:08.483728] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9fd420/0xa01910) succeed. 00:15:17.819 [2024-07-13 21:02:08.494079] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9fe9c0/0xa42fa0) succeed. 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 Malloc0 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 Delay0 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 [2024-07-13 21:02:08.655726] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.819 21:02:08 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:17.819 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.117 [2024-07-13 21:02:08.749350] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:20.031 Initializing NVMe Controllers 00:15:20.031 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:20.031 controller IO queue size 128 less than required 00:15:20.031 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:20.031 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:20.031 Initialization complete. Launching workers. 00:15:20.031 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 50648 00:15:20.031 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 50709, failed to submit 62 00:15:20.031 success 50649, unsuccess 60, failed 0 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:20.031 rmmod nvme_rdma 00:15:20.031 rmmod nvme_fabrics 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 
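Condensed, the target/abort.sh body traced above amounts to the following RPC sequence plus one run of the abort example; every command below is taken from the rpc_cmd traces, with $SPDK as in the sketch earlier and rpc_py as shorthand for scripts/rpc.py:

    # The abort test traced above (target/abort.sh@17-34), condensed to its RPC calls.
    rpc_py=$SPDK/scripts/rpc.py
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $rpc_py bdev_malloc_create 64 4096 -b Malloc0
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $SPDK/build/examples/abort \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0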
00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3476602 ']' 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3476602 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3476602 ']' 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3476602 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:20.031 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3476602 00:15:20.299 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:20.299 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:20.299 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3476602' 00:15:20.299 killing process with pid 3476602 00:15:20.299 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3476602 00:15:20.299 21:02:10 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3476602 00:15:20.561 21:02:11 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.561 21:02:11 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:20.561 00:15:20.561 real 0m10.071s 00:15:20.561 user 0m12.470s 00:15:20.561 sys 0m5.789s 00:15:20.561 21:02:11 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:20.561 21:02:11 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:15:20.561 ************************************ 00:15:20.561 END TEST nvmf_abort 00:15:20.561 ************************************ 00:15:20.561 21:02:11 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:20.561 21:02:11 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:20.561 21:02:11 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:20.561 21:02:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:20.561 ************************************ 00:15:20.561 START TEST nvmf_ns_hotplug_stress 00:15:20.561 ************************************ 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:20.561 * Looking for test storage... 
00:15:20.561 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.561 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.562 21:02:11 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:27.132 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:27.132 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:27.132 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:d9:00.0: mlx_0_0' 00:15:27.133 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:27.133 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:27.133 21:02:17 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:27.133 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:27.133 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:27.133 altname enp217s0f0np0 00:15:27.133 altname ens818f0np0 00:15:27.133 inet 192.168.100.8/24 scope global mlx_0_0 00:15:27.133 valid_lft forever preferred_lft forever 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:27.133 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:27.133 
link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:27.133 altname enp217s0f1np1 00:15:27.133 altname ens818f1np1 00:15:27.133 inet 192.168.100.9/24 scope global mlx_0_1 00:15:27.133 valid_lft forever preferred_lft forever 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:27.133 192.168.100.9' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:27.133 192.168.100.9' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:27.133 192.168.100.9' 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:15:27.133 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:27.134 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:27.134 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:27.134 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:27.134 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:27.134 21:02:17 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3480371 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3480371 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3480371 ']' 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:27.134 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.392 [2024-07-13 21:02:18.059953] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:27.392 [2024-07-13 21:02:18.060004] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.392 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.392 [2024-07-13 21:02:18.133971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.392 [2024-07-13 21:02:18.173568] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.392 [2024-07-13 21:02:18.173608] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.392 [2024-07-13 21:02:18.173618] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.392 [2024-07-13 21:02:18.173627] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.392 [2024-07-13 21:02:18.173634] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.392 [2024-07-13 21:02:18.173679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.392 [2024-07-13 21:02:18.173751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.392 [2024-07-13 21:02:18.173754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.329 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:28.329 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:15:28.329 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:28.329 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:28.329 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.329 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.329 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:28.329 21:02:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:28.329 [2024-07-13 21:02:19.078997] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e2f420/0x1e33910) succeed. 00:15:28.329 [2024-07-13 21:02:19.089267] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e309c0/0x1e74fa0) succeed. 
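The hotplug-stress body traced below reduces to a namespace detach/re-attach and null-bdev resize loop running against a live perf workload. A condensed sketch of that loop, with rpc_py as the wrapper ns_hotplug_stress.sh sets at @11; the real script's loop control and result checks are simplified here:

    # Condensed form of target/ns_hotplug_stress.sh@40-50 as traced below:
    # a 30s randread perf workload runs while namespace 1 is removed, Delay0 is
    # re-attached, and NULL1 grows by one unit per iteration.
    $SPDK/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do   # loop until perf exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done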
00:15:28.329 21:02:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:28.587 21:02:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:28.846 [2024-07-13 21:02:19.540301] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:28.846 21:02:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:29.104 21:02:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:29.104 Malloc0 00:15:29.104 21:02:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:29.363 Delay0 00:15:29.363 21:02:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.621 21:02:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:29.621 NULL1 00:15:29.621 21:02:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:29.880 21:02:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3480871 00:15:29.880 21:02:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:29.880 21:02:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:29.880 21:02:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.880 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.258 Read completed with error (sct=0, sc=11) 00:15:31.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.258 21:02:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.258 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:15:31.258 21:02:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:31.258 21:02:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:31.517 true 00:15:31.517 21:02:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:31.517 21:02:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.455 21:02:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.455 21:02:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:32.455 21:02:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:32.714 true 00:15:32.714 21:02:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:32.714 21:02:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.649 21:02:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:33.650 21:02:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:33.650 21:02:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:33.908 true 00:15:33.908 21:02:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:33.908 21:02:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.846 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:15:34.846 21:02:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:34.846 21:02:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:34.846 21:02:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:35.104 true 00:15:35.104 21:02:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:35.104 21:02:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.042 21:02:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:36.042 21:02:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:36.042 21:02:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:36.042 true 00:15:36.302 21:02:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:36.302 21:02:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.239 21:02:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:37.239 21:02:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:37.239 21:02:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:37.239 true 00:15:37.498 21:02:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:37.498 21:02:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.498 21:02:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.759 21:02:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:37.759 21:02:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:37.759 true 00:15:38.018 21:02:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:38.018 21:02:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.018 21:02:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.276 21:02:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:38.276 21:02:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:38.536 true 00:15:38.536 21:02:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:38.536 21:02:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.536 21:02:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:38.832 21:02:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:38.832 21:02:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:39.090 true 00:15:39.090 21:02:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:39.090 21:02:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.028 21:02:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:40.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.288 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:15:40.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:40.288 21:02:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:40.288 21:02:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:40.548 true 00:15:40.548 21:02:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:40.548 21:02:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.486 21:02:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.486 21:02:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:41.486 21:02:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:41.746 true 00:15:41.746 21:02:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:41.746 21:02:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.684 21:02:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.684 21:02:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:42.684 21:02:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:42.942 true 00:15:42.942 21:02:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 3480871 00:15:42.942 21:02:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.876 21:02:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:43.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:43.876 21:02:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:43.876 21:02:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:44.134 true 00:15:44.134 21:02:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:44.134 21:02:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.081 21:02:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.081 21:02:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:45.081 21:02:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:45.344 true 00:15:45.344 21:02:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:45.344 21:02:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.280 21:02:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.280 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.280 21:02:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:46.280 21:02:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:46.539 true 00:15:46.539 21:02:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:46.539 21:02:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.474 21:02:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.474 21:02:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:47.474 21:02:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:47.731 true 00:15:47.731 21:02:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:47.731 21:02:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.665 21:02:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.665 21:02:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:48.665 21:02:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:48.923 true 00:15:48.923 21:02:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:48.923 21:02:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.853 21:02:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:49.853 21:02:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:49.853 21:02:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:50.110 true 00:15:50.110 21:02:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:50.110 21:02:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.041 21:02:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.041 21:02:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:51.041 21:02:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:51.300 true 00:15:51.300 21:02:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:51.300 21:02:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.236 21:02:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.236 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:15:52.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.236 21:02:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:52.236 21:02:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:52.494 true 00:15:52.495 21:02:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:52.495 21:02:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.430 21:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.689 21:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:53.689 21:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:53.689 true 00:15:53.689 21:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:53.689 21:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.626 21:02:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.885 21:02:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:54.885 21:02:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:54.885 true 00:15:54.885 21:02:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:54.885 21:02:45 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.822 21:02:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:55.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:55.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:56.083 21:02:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:56.083 21:02:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:56.083 true 00:15:56.083 21:02:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:56.083 21:02:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.111 21:02:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:57.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.111 21:02:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:57.111 21:02:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:57.370 true 00:15:57.370 21:02:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:57.370 21:02:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:58.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.306 21:02:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:58.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
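The sh@44-sh@50 tags repeating through this stretch trace one iteration of the hotplug loop in target/ns_hotplug_stress.sh: while the I/O generator (PID 3480871 here) is alive, namespace 1 of cnode1 is hot-removed and re-added, and the NULL1 bdev grows by one unit per pass (null_size=1001, 1002, ...). The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" floods are the expected fallout, rate-limited by the logger: reads that race the removal fail while the namespace is detached (sct=0/sc=11 is consistent with NVMe generic status 0x0b, Invalid Namespace or Format, assuming the fields are printed in decimal). A minimal sketch of the loop, with rpc_py and perf_pid as assumed variable names; the RPC verbs and arguments are verbatim from the log:

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$perf_pid"; do                                          # sh@44: run until the I/O generator exits
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove NSID 1
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach the Delay0 bdev
      null_size=$((null_size + 1))                                       # sh@49: 1001, 1002, ...
      "$rpc_py" bdev_null_resize NULL1 $null_size                        # sh@50: resize NULL1 under load
  done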
00:15:58.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.306 21:02:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:58.306 21:02:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:58.566 true 00:15:58.566 21:02:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:15:58.566 21:02:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.502 21:02:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:59.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.761 21:02:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:59.761 21:02:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:59.761 true 00:16:00.020 21:02:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:16:00.020 21:02:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.956 21:02:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:00.956 21:02:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:00.956 21:02:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:00.956 true 00:16:00.956 21:02:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:16:00.956 21:02:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.215 21:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.474 21:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 
00:16:01.474 21:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:01.474 true 00:16:01.733 21:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:16:01.733 21:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.733 21:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.992 21:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:01.992 21:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:02.251 true 00:16:02.251 21:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:16:02.251 21:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.251 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:02.533 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:16:02.533 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:16:02.797 true
00:16:02.797 Initializing NVMe Controllers
00:16:02.797 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:16:02.797 Controller IO queue size 128, less than required.
00:16:02.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:02.797 Controller IO queue size 128, less than required.
00:16:02.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:02.797 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:02.797 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:16:02.797 Initialization complete. Launching workers.
00:16:02.797 ========================================================
00:16:02.797                                                                                                Latency(us)
00:16:02.797 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:16:02.797 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5362.27       2.62   20456.36     841.16 1135028.77
00:16:02.797 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   31924.27      15.59    4009.41    1356.20  285846.75
00:16:02.797 ========================================================
00:16:02.797 Total                                                                          :   37286.53      18.21    6374.68     841.16 1135028.77
00:16:02.797
00:16:02.797 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:16:02.797 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.797 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:03.055 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:16:03.055 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:16:03.314 true 00:16:03.314 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480871 00:16:03.314 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3480871) - No such process 00:16:03.314 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3480871 00:16:03.314 21:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.314 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:03.573 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:03.573 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:03.573 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:03.573 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:03.573 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:03.832 null0 00:16:03.832 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:03.832 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:03.832 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:03.832 null1 00:16:03.832 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:03.832 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:03.832 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:04.090 null2 00:16:04.090 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.090 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.090 21:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:04.349 null3 00:16:04.349 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.349 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.349 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:04.349 null4 00:16:04.349 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.349 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.349 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:04.608 null5 00:16:04.608 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.608 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.608 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:04.868 null6 00:16:04.868 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:04.868 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:04.868 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:04.868 null7 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
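In the completion report a few lines up, the Total row is just the two namespace rows combined: IOPS and MiB/s add up, and the average latency is the IOPS-weighted mean of the per-namespace averages. NSID 1's far worse figures (about 20.5 ms average and a 1.13 s max, versus 4.0 ms for NSID 2) are plausibly the cost of being the namespace that was hot-removed and re-added all run long. A quick check, with the values copied from the table:

  # Recompute the Total row from the two NSID rows of the report above.
  awk 'BEGIN {
      iops1 = 5362.27;  avg1 = 20456.36    # NSID 1
      iops2 = 31924.27; avg2 = 4009.41     # NSID 2
      total = iops1 + iops2
      printf "Total IOPS:  %.2f\n", total                                  # ~37286.5
      printf "Avg latency: %.2f us\n", (iops1*avg1 + iops2*avg2) / total   # ~6374.7
  }'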
00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
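With the single-namespace phase done, the sh@58-sh@66 tags show the parallel phase being wired up: eight null bdevs (null0 through null7, 100 MiB with 4096-byte blocks) are created, then one add_remove worker per bdev is forked into the background and its PID recorded so the script can wait on all eight. A sketch of that fan-out, reusing the rpc_py shorthand from the earlier sketch, with the loop structure inferred from the tags:

  nthreads=8                                            # sh@58
  pids=()
  for ((i = 0; i < nthreads; i++)); do                  # sh@59
      "$rpc_py" bdev_null_create "null$i" 100 4096      # sh@60: 100 MiB bdev, 4 KiB blocks
  done
  for ((i = 0; i < nthreads; i++)); do                  # sh@62
      add_remove $((i + 1)) "null$i" &                  # sh@63: one worker per (NSID, bdev) pair
      pids+=($!)                                        # sh@64: collect worker PIDs
  done
  wait "${pids[@]}"                                     # sh@66: join all eight workers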
00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:05.128 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
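Each worker runs the add_remove helper whose sh@14-sh@18 tags interleave through the surrounding lines: ten cycles of attaching its own bdev under a fixed namespace ID and detaching it again, all eight workers hammering cnode1 concurrently. A sketch, with the argument order taken from the logged RPC calls:

  # add_remove as the sh@14-sh@18 tags suggest: 10 hotplug cycles per worker.
  add_remove() {
      local nsid=$1 bdev=$2                             # sh@14
      for ((i = 0; i < 10; i++)); do                    # sh@16
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
      done
  }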
00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3487000 3487002 3487005 3487008 3487011 3487014 3487017 3487020 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:05.129 21:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.388 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.647 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:05.906 21:02:56 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:05.906 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.907 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:05.907 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:05.907 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:05.907 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:05.907 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:05.907 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:05.907 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.166 21:02:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:06.166 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.426 21:02:57 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.426 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:06.685 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.685 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.685 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:06.685 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:06.685 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:06.685 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:06.686 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:06.686 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:06.686 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.686 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.686 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.945 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:07.203 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.204 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:07.204 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:07.204 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:07.204 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:07.204 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:07.204 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.204 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.204 21:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.204 21:02:58 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:07.204 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.463 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.463 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:07.463 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:07.463 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:07.463 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:07.463 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:07.463 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:07.463 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.463 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.463 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:07.722 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:07.981 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:08.241 21:02:58 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:08.241 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:08.241 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:08.241 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:08.241 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:08.241 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:08.241 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.241 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.241 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:08.241 21:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:08.501 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:08.502 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:08.502 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:08.502 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:08.502 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:08.502 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.502 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.502 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:08.502 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:08.762 rmmod nvme_rdma 00:16:08.762 rmmod nvme_fabrics 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3480371 ']' 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3480371 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3480371 ']' 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3480371 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3480371 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3480371' 00:16:08.762 killing process with pid 3480371 00:16:08.762 
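The scrambled namespace IDs and the repeated (( ++i )) / (( i < 10 )) checks at ns_hotplug_stress.sh lines 16-18 in the churn above are consistent with several backgrounded workers, each cycling its own namespace against the same subsystem. A minimal sketch of that pattern, reconstructed from the xtrace rather than taken from the verbatim ns_hotplug_stress.sh (the worker layout and variable names are assumptions; the rpc.py path, subsystem NQN, and null bdev names come from the log):

#!/usr/bin/env bash
# Hypothetical reconstruction of the hotplug churn recorded above: one worker
# per namespace ID, each repeatedly attaching a null bdev as a namespace and
# detaching it again while the target stays live.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

for n in $(seq 1 8); do
	(
		for ((i = 0; i < 10; ++i)); do    # matches the (( i < 10 )) checks at @16
			"$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))"   # @17
			"$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n"                    # @18
		done
	) &
done
wait   # all workers must drain before the trap is cleared and nvmftestfini runs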
21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3480371
00:16:08.762 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3480371
00:16:09.021 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:09.021 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:16:09.021
00:16:09.021 real 0m48.592s
00:16:09.021 user 3m18.461s
00:16:09.021 sys 0m14.353s
00:16:09.021 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:16:09.021 21:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:16:09.021 ************************************
00:16:09.021 END TEST nvmf_ns_hotplug_stress
00:16:09.021 ************************************
00:16:09.281 21:02:59 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:16:09.281 21:02:59 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:16:09.281 21:02:59 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:16:09.281 21:02:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:16:09.281 ************************************
00:16:09.281 START TEST nvmf_connect_stress
00:16:09.281 ************************************
00:16:09.281 21:02:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:16:09.281 * Looking for test storage...
00:16:09.281 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
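The nvmftestfini entries just above the END TEST banner boil down to unloading the initiator modules and reaping the target process. A simplified sketch of the observable steps (illustrative helper names, not the nvmf/common.sh or autotest_common.sh sources):

# Sketch of the cleanup the log records: flush, unload the NVMe-oF initiator
# modules, then kill the nvmf_tgt reactor and collect its exit status.
nvmfcleanup_sketch() {
	sync
	modprobe -v -r nvme-rdma      # prints the "rmmod nvme_rdma" lines seen above
	modprobe -v -r nvme-fabrics
}

killprocess_sketch() {
	local pid=$1
	kill -0 "$pid" 2> /dev/null || return 0     # nothing to do if it already exited
	local name
	name=$(ps --no-headers -o comm= "$pid")     # reactor_1 in this run
	[ "$name" = sudo ] && return 0              # don't kill a sudo wrapper directly
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid" || true   # reap it; works here because nvmf_tgt is a child shell job
}
# e.g. nvmfcleanup_sketch && killprocess_sketch 3480371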
00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:09.281 21:03:00 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.281 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:09.282 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:09.282 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:09.282 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.282 21:03:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.282 21:03:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.282 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:09.282 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:09.282 21:03:00 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:09.282 21:03:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:15.906 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:15.906 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 
-- # [[ rdma == rdma ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:15.906 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:15.906 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:15.906 
21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:15.906 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:15.907 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:15.907 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:15.907 altname enp217s0f0np0 00:16:15.907 altname ens818f0np0 00:16:15.907 inet 192.168.100.8/24 scope global mlx_0_0 00:16:15.907 valid_lft forever preferred_lft forever 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address 
mlx_0_1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:15.907 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:15.907 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:15.907 altname enp217s0f1np1 00:16:15.907 altname ens818f1np1 00:16:15.907 inet 192.168.100.9/24 scope global mlx_0_1 00:16:15.907 valid_lft forever preferred_lft forever 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:15.907 192.168.100.9' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:15.907 192.168.100.9' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:15.907 192.168.100.9' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3491659 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3491659 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3491659 ']' 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 
00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:15.907 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.907 [2024-07-13 21:03:06.632440] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:15.907 [2024-07-13 21:03:06.632493] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.907 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.907 [2024-07-13 21:03:06.705765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:15.907 [2024-07-13 21:03:06.744983] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.907 [2024-07-13 21:03:06.745029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.907 [2024-07-13 21:03:06.745039] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.907 [2024-07-13 21:03:06.745051] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.907 [2024-07-13 21:03:06.745074] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.907 [2024-07-13 21:03:06.745176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.907 [2024-07-13 21:03:06.745266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.907 [2024-07-13 21:03:06.745267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.193 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:16.193 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:16:16.193 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:16.193 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:16.193 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.193 21:03:06 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.193 21:03:06 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:16.193 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.193 21:03:06 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.193 [2024-07-13 21:03:06.904276] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f0d420/0x1f11910) succeed. 00:16:16.193 [2024-07-13 21:03:06.914404] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f0e9c0/0x1f52fa0) succeed. 
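Both mlx5 ports carry 192.168.100.8/24 and 192.168.100.9/24, and nvmf_tgt has come up with both IB devices created. The per-interface address lookup the trace keeps repeating (nvmf/common.sh@112-113) is a three-command pipeline; a minimal reconstruction from the commands shown:

# get_ip_address as traced: field 4 of `ip -o -4 addr show` is the CIDR
# address (e.g. 192.168.100.8/24); cut strips the /prefix.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig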
00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.193 [2024-07-13 21:03:07.020677] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.193 NULL1 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3491748 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.193 EAL: No free 2048 kB hugepages reported on node 1 
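Stripped of the xtrace noise, the target-side setup connect_stress.sh@15-18 performs above is four calls through the suite's RPC helper, with every value taken verbatim from the trace:

# RDMA transport with 1024 shared buffers, a subsystem capped at 10
# namespaces with any-host access, an RDMA listener on the first target
# IP, and a 1000 MB null bdev with 512-byte blocks for the stress run.
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512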
00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.193 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.452 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.710 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.710 21:03:07 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:16.710 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.710 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.710 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.969 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.969 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:16.969 21:03:07 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.969 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.969 21:03:07 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.537 21:03:08 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.537 21:03:08 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:17.537 21:03:08 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.537 21:03:08 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.537 21:03:08 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.795 21:03:08 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.795 21:03:08 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:17.795 21:03:08 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.795 21:03:08 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.795 21:03:08 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.053 21:03:08 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.053 21:03:08 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:18.053 21:03:08 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.053 21:03:08 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.053 21:03:08 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.312 21:03:09 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.312 21:03:09 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:18.312 21:03:09 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.312 21:03:09 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.312 21:03:09 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.570 21:03:09 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.570 21:03:09 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:18.570 21:03:09 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.570 21:03:09 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.570 21:03:09 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.137 21:03:09 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.137 21:03:09 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:19.137 21:03:09 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.137 21:03:09 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.137 21:03:09 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.396 21:03:10 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.396 21:03:10 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:19.396 21:03:10 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.396 21:03:10 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.396 21:03:10 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.655 21:03:10 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.655 21:03:10 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:19.655 21:03:10 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.655 21:03:10 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.655 21:03:10 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.915 21:03:10 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.915 21:03:10 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:19.915 21:03:10 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.915 21:03:10 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.915 21:03:10 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.174 21:03:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.174 21:03:11 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:20.174 21:03:11 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.174 21:03:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.174 21:03:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.740 21:03:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.740 21:03:11 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:20.740 21:03:11 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.740 21:03:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.740 21:03:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.998 21:03:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.998 21:03:11 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:20.998 21:03:11 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.998 21:03:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.998 21:03:11 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.257 21:03:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.257 21:03:12 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:21.257 21:03:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.257 21:03:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.257 21:03:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:21.516 21:03:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.516 21:03:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:21.516 21:03:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:21.516 21:03:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.516 21:03:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.085 21:03:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.085 21:03:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:22.085 21:03:12 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.085 21:03:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.085 21:03:12 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.344 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.344 21:03:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:22.344 21:03:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.344 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.344 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.603 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.603 21:03:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:22.603 21:03:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.603 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.603 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:22.861 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.861 21:03:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:22.861 21:03:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:22.861 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.861 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.120 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.120 21:03:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:23.120 21:03:13 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.120 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.120 21:03:13 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.688 21:03:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.688 21:03:14 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:23.688 21:03:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.688 21:03:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.688 21:03:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.947 21:03:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.947 21:03:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:23.947 21:03:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:23.947 21:03:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.947 21:03:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.206 21:03:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.206 21:03:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:24.206 21:03:14 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.206 21:03:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.206 21:03:14 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:24.466 21:03:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.466 21:03:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:24.466 21:03:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:24.466 21:03:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.466 21:03:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.034 21:03:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.034 21:03:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:25.034 21:03:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.034 21:03:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.034 21:03:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.294 21:03:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.294 21:03:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:25.294 21:03:15 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.294 21:03:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.294 21:03:15 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.553 21:03:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.553 21:03:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:25.553 21:03:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.553 21:03:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.553 21:03:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:25.812 21:03:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.812 21:03:16 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:25.812 21:03:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:25.812 21:03:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.812 21:03:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:26.071 21:03:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.071 21:03:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:26.071 21:03:16 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:26.071 21:03:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.071 21:03:16 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:26.329 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3491748 00:16:26.588 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3491748) - No such process 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3491748 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:26.588 rmmod nvme_rdma 00:16:26.588 rmmod nvme_fabrics 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3491659 ']' 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3491659 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3491659 ']' 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3491659 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3491659 
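The block of repeating kill -0 / rpc_cmd pairs above is the whole stress phase: connect_stress (PID 3491748) hammers the target for its 10-second run while the script keeps checking that it is alive and replaying the queued RPCs, until kill -0 fails with 'No such process' and the script reaps it and cleans up. The shape inferred from the traced line numbers 34-39:

# Inferred loop body of connect_stress.sh; line numbers match the trace.
# Feeding rpc.txt into rpc_cmd on each pass is an assumption, based on
# rpcs=.../rpc.txt at line 23 and the bare rpc_cmd at line 35.
while kill -0 "$PERF_PID"; do   # line 34: stress process still running?
    rpc_cmd < "$rpcs"           # line 35: replay the queued RPCs
done
wait "$PERF_PID"                # line 38: reap the exit status
rm -f "$rpcs"                   # line 39: drop rpc.txt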
00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3491659' 00:16:26.588 killing process with pid 3491659 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3491659 00:16:26.588 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3491659 00:16:26.847 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.847 21:03:17 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:26.847 00:16:26.847 real 0m17.662s 00:16:26.847 user 0m39.727s 00:16:26.847 sys 0m7.482s 00:16:26.847 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:26.847 21:03:17 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:26.847 ************************************ 00:16:26.847 END TEST nvmf_connect_stress 00:16:26.847 ************************************ 00:16:26.847 21:03:17 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:26.847 21:03:17 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:26.847 21:03:17 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:26.847 21:03:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:26.847 ************************************ 00:16:26.847 START TEST nvmf_fused_ordering 00:16:26.847 ************************************ 00:16:26.847 21:03:17 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:27.105 * Looking for test storage... 
00:16:27.105 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:27.105 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:27.106 21:03:17 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.893 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:33.894 21:03:24 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:33.894 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:33.894 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:33.894 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:33.894 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:33.894 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:33.894 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:33.894 altname enp217s0f0np0 00:16:33.894 altname ens818f0np0 00:16:33.894 inet 192.168.100.8/24 scope global mlx_0_0 00:16:33.894 valid_lft forever preferred_lft forever 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:33.894 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:33.894 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:33.894 altname enp217s0f1np1 00:16:33.894 altname ens818f1np1 00:16:33.894 inet 192.168.100.9/24 scope global mlx_0_1 00:16:33.894 valid_lft forever preferred_lft forever 00:16:33.894 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:33.895 192.168.100.9' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:33.895 192.168.100.9' 
00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:33.895 192.168.100.9' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3496784 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3496784 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3496784 ']' 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:33.895 21:03:24 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:33.895 [2024-07-13 21:03:24.583216] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:33.895 [2024-07-13 21:03:24.583272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.895 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.895 [2024-07-13 21:03:24.653168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.895 [2024-07-13 21:03:24.689658] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
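The address plumbing in the records above follows a small, reusable shell pattern: "ip -o -4 addr show <if>" emits one line per IPv4 address, awk takes the CIDR field, cut strips the prefix length, and head/tail split the resulting two-line list into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that pattern, assuming the mlx_0_* interface names from this run (get_ip_address here is a local stand-in, not the SPDK helper itself):

    #!/usr/bin/env bash
    # Print the first IPv4 address of an interface, without the /prefix length.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    rdma_ip_list=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
    first_ip=$(echo "$rdma_ip_list" | head -n 1)                # 192.168.100.8
    second_ip=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)  # 192.168.100.9
    echo "first=$first_ip second=$second_ip"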
00:16:33.895 [2024-07-13 21:03:24.689704] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.895 [2024-07-13 21:03:24.689713] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.895 [2024-07-13 21:03:24.689722] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.895 [2024-07-13 21:03:24.689728] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.895 [2024-07-13 21:03:24.689755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.831 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:34.831 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:16:34.831 21:03:25 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.831 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.831 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.831 21:03:25 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.831 21:03:25 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:34.831 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.831 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.832 [2024-07-13 21:03:25.452180] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8d8e50/0x8dd340) succeed. 00:16:34.832 [2024-07-13 21:03:25.461226] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8da350/0x91e9d0) succeed. 
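With the RDMA transport created, the rest of the fused_ordering target is assembled over JSON-RPC; rpc_cmd forwards each call to the SPDK RPC socket (/var/tmp/spdk.sock, as waitforlisten logged above). A rough standalone equivalent of the sequence the following records execute, issued through scripts/rpc.py directly (NQN, serial number, and sizes copied from this run):

    # Same setup as the rpc_cmd calls in this test, issued against the default socket.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1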
00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.832 [2024-07-13 21:03:25.520358] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.832 NULL1 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.832 21:03:25 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:34.832 [2024-07-13 21:03:25.575634] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:16:34.832 [2024-07-13 21:03:25.575678] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497063 ] 00:16:34.832 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.091 Attached to nqn.2016-06.io.spdk:cnode1 00:16:35.091 Namespace ID: 1 size: 1GB 00:16:35.091 fused_ordering(0) 00:16:35.091 fused_ordering(1) 00:16:35.091 fused_ordering(2) 00:16:35.091 fused_ordering(3) 00:16:35.092 fused_ordering(4) 00:16:35.092 fused_ordering(5) 00:16:35.092 fused_ordering(6) 00:16:35.092 fused_ordering(7) 00:16:35.092 fused_ordering(8) 00:16:35.092 fused_ordering(9) 00:16:35.092 fused_ordering(10) 00:16:35.092 fused_ordering(11) 00:16:35.092 fused_ordering(12) 00:16:35.092 fused_ordering(13) 00:16:35.092 fused_ordering(14) 00:16:35.092 fused_ordering(15) 00:16:35.092 fused_ordering(16) 00:16:35.092 fused_ordering(17) 00:16:35.092 fused_ordering(18) 00:16:35.092 fused_ordering(19) 00:16:35.092 fused_ordering(20) 00:16:35.092 fused_ordering(21) 00:16:35.092 fused_ordering(22) 00:16:35.092 fused_ordering(23) 00:16:35.092 fused_ordering(24) 00:16:35.092 fused_ordering(25) 00:16:35.092 fused_ordering(26) 00:16:35.092 fused_ordering(27) 00:16:35.092 fused_ordering(28) 00:16:35.092 fused_ordering(29) 00:16:35.092 fused_ordering(30) 00:16:35.092 fused_ordering(31) 00:16:35.092 fused_ordering(32) 00:16:35.092 fused_ordering(33) 00:16:35.092 fused_ordering(34) 00:16:35.092 fused_ordering(35) 00:16:35.092 fused_ordering(36) 00:16:35.092 fused_ordering(37) 00:16:35.092 fused_ordering(38) 00:16:35.092 fused_ordering(39) 00:16:35.092 fused_ordering(40) 00:16:35.092 fused_ordering(41) 00:16:35.092 fused_ordering(42) 00:16:35.092 fused_ordering(43) 00:16:35.092 fused_ordering(44) 00:16:35.092 fused_ordering(45) 00:16:35.092 fused_ordering(46) 00:16:35.092 fused_ordering(47) 00:16:35.092 fused_ordering(48) 00:16:35.092 fused_ordering(49) 00:16:35.092 fused_ordering(50) 00:16:35.092 fused_ordering(51) 00:16:35.092 fused_ordering(52) 00:16:35.092 fused_ordering(53) 00:16:35.092 fused_ordering(54) 00:16:35.092 fused_ordering(55) 00:16:35.092 fused_ordering(56) 00:16:35.092 fused_ordering(57) 00:16:35.092 fused_ordering(58) 00:16:35.092 fused_ordering(59) 00:16:35.092 fused_ordering(60) 00:16:35.092 fused_ordering(61) 00:16:35.092 fused_ordering(62) 00:16:35.092 fused_ordering(63) 00:16:35.092 fused_ordering(64) 00:16:35.092 fused_ordering(65) 00:16:35.092 fused_ordering(66) 00:16:35.092 fused_ordering(67) 00:16:35.092 fused_ordering(68) 00:16:35.092 fused_ordering(69) 00:16:35.092 fused_ordering(70) 00:16:35.092 fused_ordering(71) 00:16:35.092 fused_ordering(72) 00:16:35.092 fused_ordering(73) 00:16:35.092 fused_ordering(74) 00:16:35.092 fused_ordering(75) 00:16:35.092 fused_ordering(76) 00:16:35.092 fused_ordering(77) 00:16:35.092 fused_ordering(78) 00:16:35.092 fused_ordering(79) 00:16:35.092 fused_ordering(80) 00:16:35.092 fused_ordering(81) 00:16:35.092 fused_ordering(82) 00:16:35.092 fused_ordering(83) 00:16:35.092 fused_ordering(84) 00:16:35.092 fused_ordering(85) 00:16:35.092 fused_ordering(86) 00:16:35.092 fused_ordering(87) 00:16:35.092 fused_ordering(88) 00:16:35.092 fused_ordering(89) 00:16:35.092 fused_ordering(90) 00:16:35.092 fused_ordering(91) 00:16:35.092 fused_ordering(92) 00:16:35.092 fused_ordering(93) 00:16:35.092 fused_ordering(94) 00:16:35.092 fused_ordering(95) 00:16:35.092 fused_ordering(96) 00:16:35.092 
fused_ordering(97) … fused_ordering(956) [counter lines (97) through (956), timestamps 00:16:35.092 to 00:16:35.357, elided; the harness logs one fused_ordering(N) line per fused command pair, counting monotonically toward 1023]
fused_ordering(957) 00:16:35.357 fused_ordering(958) 00:16:35.357 fused_ordering(959) 00:16:35.357 fused_ordering(960) 00:16:35.357 fused_ordering(961) 00:16:35.357 fused_ordering(962) 00:16:35.357 fused_ordering(963) 00:16:35.357 fused_ordering(964) 00:16:35.357 fused_ordering(965) 00:16:35.357 fused_ordering(966) 00:16:35.357 fused_ordering(967) 00:16:35.357 fused_ordering(968) 00:16:35.357 fused_ordering(969) 00:16:35.357 fused_ordering(970) 00:16:35.357 fused_ordering(971) 00:16:35.357 fused_ordering(972) 00:16:35.357 fused_ordering(973) 00:16:35.357 fused_ordering(974) 00:16:35.357 fused_ordering(975) 00:16:35.357 fused_ordering(976) 00:16:35.357 fused_ordering(977) 00:16:35.357 fused_ordering(978) 00:16:35.357 fused_ordering(979) 00:16:35.357 fused_ordering(980) 00:16:35.357 fused_ordering(981) 00:16:35.357 fused_ordering(982) 00:16:35.357 fused_ordering(983) 00:16:35.357 fused_ordering(984) 00:16:35.357 fused_ordering(985) 00:16:35.357 fused_ordering(986) 00:16:35.357 fused_ordering(987) 00:16:35.357 fused_ordering(988) 00:16:35.357 fused_ordering(989) 00:16:35.357 fused_ordering(990) 00:16:35.357 fused_ordering(991) 00:16:35.357 fused_ordering(992) 00:16:35.357 fused_ordering(993) 00:16:35.357 fused_ordering(994) 00:16:35.357 fused_ordering(995) 00:16:35.357 fused_ordering(996) 00:16:35.357 fused_ordering(997) 00:16:35.357 fused_ordering(998) 00:16:35.357 fused_ordering(999) 00:16:35.357 fused_ordering(1000) 00:16:35.357 fused_ordering(1001) 00:16:35.357 fused_ordering(1002) 00:16:35.357 fused_ordering(1003) 00:16:35.357 fused_ordering(1004) 00:16:35.357 fused_ordering(1005) 00:16:35.357 fused_ordering(1006) 00:16:35.357 fused_ordering(1007) 00:16:35.357 fused_ordering(1008) 00:16:35.357 fused_ordering(1009) 00:16:35.357 fused_ordering(1010) 00:16:35.357 fused_ordering(1011) 00:16:35.357 fused_ordering(1012) 00:16:35.357 fused_ordering(1013) 00:16:35.357 fused_ordering(1014) 00:16:35.357 fused_ordering(1015) 00:16:35.357 fused_ordering(1016) 00:16:35.357 fused_ordering(1017) 00:16:35.357 fused_ordering(1018) 00:16:35.357 fused_ordering(1019) 00:16:35.357 fused_ordering(1020) 00:16:35.357 fused_ordering(1021) 00:16:35.357 fused_ordering(1022) 00:16:35.357 fused_ordering(1023) 00:16:35.357 21:03:26 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:35.357 21:03:26 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:35.357 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.357 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:35.689 rmmod nvme_rdma 00:16:35.689 rmmod nvme_fabrics 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3496784 ']' 00:16:35.689 
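Teardown runs through nvmftestfini, invoked directly after clearing the trap installed when the target started: it syncs, unloads nvme-rdma and nvme-fabrics with failures tolerated, and only then kills the target by pid, as the next records show. A rough sketch of that shape, assuming a $nvmfpid captured at launch (the retry bound mirrors the "for i in {1..20}" seen above; the real helper also checks the process name before killing):

    # Unload initiator modules, tolerating transient "module in use" failures,
    # then stop the pid-guarded nvmf_tgt process.
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    [ -n "$nvmfpid" ] && kill "$nvmfpid" && wait "$nvmfpid"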
21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3496784 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3496784 ']' 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3496784 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3496784 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3496784' 00:16:35.689 killing process with pid 3496784 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3496784 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3496784 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:35.689 00:16:35.689 real 0m8.853s 00:16:35.689 user 0m4.675s 00:16:35.689 sys 0m5.518s 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:35.689 21:03:26 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:35.689 ************************************ 00:16:35.689 END TEST nvmf_fused_ordering 00:16:35.689 ************************************ 00:16:35.949 21:03:26 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:35.949 21:03:26 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:35.949 21:03:26 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:35.949 21:03:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:35.949 ************************************ 00:16:35.949 START TEST nvmf_delete_subsystem 00:16:35.949 ************************************ 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:35.949 * Looking for test storage... 
00:16:35.949 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.949 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:35.950 21:03:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.520 21:03:32 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.520 21:03:33 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:42.520 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:42.520 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:42.521 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:42.521 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.521 21:03:33 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:42.521 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.521 21:03:33 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:42.521 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:42.521 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:42.521 altname enp217s0f0np0 00:16:42.521 altname ens818f0np0 00:16:42.521 inet 192.168.100.8/24 scope global mlx_0_0 00:16:42.521 valid_lft forever preferred_lft forever 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:42.521 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:42.521 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:42.521 altname enp217s0f1np1 00:16:42.521 altname ens818f1np1 00:16:42.521 inet 192.168.100.9/24 scope global mlx_0_1 00:16:42.521 valid_lft forever preferred_lft forever 00:16:42.521 21:03:33 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:42.521 192.168.100.9' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:42.521 192.168.100.9' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:42.521 192.168.100.9' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:42.521 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.522 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3500434 00:16:42.522 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:42.522 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3500434 00:16:42.522 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3500434 ']' 00:16:42.522 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.522 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:42.522 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.522 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:42.522 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.522 [2024-07-13 21:03:33.275868] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
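Everything from nvmf/common.sh@289 down to this point is NIC plumbing: find the supported PCI devices, map each one to its kernel netdev, and harvest the IPv4 addresses that become the target IPs. A reduced sketch of that logic, reconstructed from the xtrace above; the hard-coded BDFs stand in for the pci_bus_cache lookups the real script performs, and the rxe filtering done by get_rdma_if_list is skipped:

    #!/usr/bin/env bash
    # On this node the supported devices resolve to the two Mellanox
    # ConnectX ports 0000:d9:00.0 / 0000:d9:00.1 (0x15b3, device 0x1015).
    pci_devs=(0000:d9:00.0 0000:d9:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # The kernel exposes the netdev name under the PCI device's sysfs node.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, keep e.g. mlx_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

    # nvmf/common.sh@113: pull the IPv4 address off an interface.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    # nvmf/common.sh@456-458: first address found becomes the target IP.
    RDMA_IP_LIST=$(for dev in "${net_devs[@]}"; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9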
00:16:42.522 [2024-07-13 21:03:33.275919] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.522 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.522 [2024-07-13 21:03:33.344717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:42.522 [2024-07-13 21:03:33.383509] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.522 [2024-07-13 21:03:33.383552] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.522 [2024-07-13 21:03:33.383562] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.522 [2024-07-13 21:03:33.383571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.522 [2024-07-13 21:03:33.383578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.522 [2024-07-13 21:03:33.383633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.522 [2024-07-13 21:03:33.383636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.781 [2024-07-13 21:03:33.541766] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x910630/0x914b20) succeed. 00:16:42.781 [2024-07-13 21:03:33.550736] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x911b30/0x9561b0) succeed. 
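The nvmfappstart call traced at nvmf/common.sh@479-482 boils down to launching the target binary and blocking until its RPC socket answers. A minimal sketch with the flags visible in the log; the polling loop is a hypothetical stand-in for the real waitforlisten helper in autotest_common.sh, which also enforces a retry budget:

    #!/usr/bin/env bash
    # Launch the NVMe-oF target on cores 0-1 (-m 0x3) with all tracepoint
    # groups enabled (-e 0xFFFF), as the trace shows. $SPDK_DIR is assumed
    # to point at the SPDK checkout.
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Poll until rpc.py can reach the default socket /var/tmp/spdk.sock.
    until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup"; exit 1; }
        sleep 0.5
    done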
00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.781 [2024-07-13 21:03:33.642161] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.781 NULL1 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:42.781 Delay0 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.781 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:43.040 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.040 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3500507 00:16:43.040 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:43.040 21:03:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:43.040 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.040 [2024-07-13 21:03:33.752007] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
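Collapsed out of the rpc_cmd xtrace above, the whole target setup is a short sequence of rpc.py calls (rpc_cmd is the harness wrapper around scripts/rpc.py). Every value below is the one visible in the log; $SPDK_DIR stands in for the jenkins workspace path:

    #!/usr/bin/env bash
    rpc="$SPDK_DIR/scripts/rpc.py"

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512              # 1000 MB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000  # 1,000,000 us = ~1 s per I/O
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Start the I/O load in the background; the harness then sleeps 2 s so
    # a full queue is in flight before the subsystem is deleted under it.
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2

Delay0 is the crux of the test: with every operation held for about a second, the nvmf_delete_subsystem issued at the 2 s mark is guaranteed to hit qpairs full of in-flight commands, which is what produces the completion-error storm below.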
00:16:44.943 21:03:35 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.943 21:03:35 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.943 21:03:35 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:46.319 NVMe io qpair process completion error 00:16:46.319 NVMe io qpair process completion error 00:16:46.319 NVMe io qpair process completion error 00:16:46.319 NVMe io qpair process completion error 00:16:46.319 NVMe io qpair process completion error 00:16:46.319 NVMe io qpair process completion error 00:16:46.319 21:03:36 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.319 21:03:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:16:46.319 21:03:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3500507 00:16:46.319 21:03:36 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:46.578 21:03:37 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:46.578 21:03:37 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3500507 00:16:46.578 21:03:37 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Write completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Write completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Write completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Write completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Write completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.146 Read completed with error (sct=0, sc=8) 00:16:47.146 starting I/O failed: -6 00:16:47.147 Write completed with error (sct=0, sc=8) 00:16:47.147 starting I/O failed: -6 00:16:47.147 Read completed with error (sct=0, sc=8) 00:16:47.147 starting I/O failed: -6 00:16:47.147 Read completed with error (sct=0, sc=8) 00:16:47.147 starting I/O failed: -6 00:16:47.147 Write completed with error (sct=0, sc=8) 00:16:47.147 starting I/O failed: -6 00:16:47.147 Write completed with error (sct=0, sc=8) 00:16:47.147 starting I/O failed: -6 00:16:47.147 Write completed with error (sct=0, sc=8) 00:16:47.147 
starting I/O failed: -6 00:16:47.147 Write completed with error (sct=0, sc=8) 00:16:47.147 starting I/O failed: -6 00:16:47.147 Read completed with error (sct=0, sc=8) 00:16:47.147 starting I/O failed: -6 00:16:47.147 Read completed with error (sct=0, sc=8) 00:16:47.147 starting I/O failed: -6
[... several hundred more "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted; the same messages repeat unchanged ...]
00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Write completed with error (sct=0, sc=8) 00:16:47.148 Write completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Write completed with error (sct=0, sc=8) 00:16:47.148 Write completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Read completed with error (sct=0, sc=8) 00:16:47.148 Initializing NVMe Controllers 00:16:47.148 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:47.148 Controller IO queue size 128, less than required. 00:16:47.148 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:47.148 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:47.148 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:47.148 Initialization complete. Launching workers. 00:16:47.148 ======================================================== 00:16:47.148 Latency(us) 00:16:47.148 Device Information : IOPS MiB/s Average min max 00:16:47.148 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.38 0.04 1595403.32 1000122.32 2981784.26 00:16:47.148 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.38 0.04 1596987.30 1001313.93 2983283.36 00:16:47.148 ======================================================== 00:16:47.148 Total : 160.75 0.08 1596195.31 1000122.32 2983283.36 00:16:47.148 00:16:47.148 21:03:37 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:47.148 21:03:37 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3500507 00:16:47.148 21:03:37 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:47.148 [2024-07-13 21:03:37.850186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:47.148 [2024-07-13 21:03:37.850230] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
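The first perf summary is worth a second look. The ~1.6 s averages and ~1.0 s minimums are expected rather than pathological: Delay0 injects 1,000,000 us into every operation, so 1.0 s is the latency floor, and with 128 commands queued per qpair Little's law gives roughly 128 / 1.596 s ≈ 80 IOPS, matching the 80.38 IOPS reported for each lcore before the run was aborted.

The kill -0 / sleep 0.5 lines interleaved with the errors are the test's poll-until-dead idiom, sketched below from the traced delete_subsystem.sh lines 34-45; the failure branch and exact wording are guesses at the real script:

    # Poll until the aborted perf process exits; kill -0 delivers no
    # signal, it only tests whether the PID still exists.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        sleep 0.5
        (( delay++ > 30 )) && exit 1   # give up after ~15 s of polling
    done
    # Once kill -0 fails ("No such process"), reap the job. The run was
    # killed mid-I/O, so a nonzero exit status is the expected outcome;
    # the NOT helper from autotest_common.sh inverts the status of
    # `wait` to assert exactly that.
    NOT wait "$perf_pid"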
00:16:47.148 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3500507 00:16:47.717 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3500507) - No such process 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3500507 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3500507 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3500507 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.717 [2024-07-13 21:03:38.372332] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3501311 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:47.717 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:47.717 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.717 [2024-07-13 21:03:38.455491] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:48.285 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:48.285 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:48.285 21:03:38 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:48.544 21:03:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:48.544 21:03:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:48.544 21:03:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:49.113 21:03:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:49.113 21:03:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:49.113 21:03:39 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:49.681 21:03:40 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:49.681 21:03:40 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:49.681 21:03:40 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:50.250 21:03:40 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:50.250 21:03:40 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:50.250 21:03:40 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:50.816 21:03:41 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:50.816 21:03:41 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:50.816 21:03:41 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:51.074 21:03:41 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:51.074 21:03:41 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:51.074 21:03:41 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:51.642 21:03:42 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:51.642 21:03:42 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:51.642 21:03:42 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:52.210 21:03:42 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:52.210 21:03:42 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:52.210 21:03:42 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:52.789 21:03:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:52.789 21:03:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:52.789 21:03:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:53.356 21:03:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:53.356 21:03:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:53.356 21:03:43 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:53.615 21:03:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:53.615 21:03:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:53.615 21:03:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:54.182 21:03:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:54.182 21:03:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:54.182 21:03:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:54.749 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:54.749 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:54.749 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:54.749 Initializing NVMe Controllers 00:16:54.749 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:54.749 Controller IO queue size 128, less than required. 00:16:54.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:54.749 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:54.749 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:54.749 Initialization complete. Launching workers. 
00:16:54.749 ======================================================== 00:16:54.750 Latency(us) 00:16:54.750 Device Information : IOPS MiB/s Average min max 00:16:54.750 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001387.85 1000061.55 1004257.52 00:16:54.750 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002902.81 1000534.21 1005788.92 00:16:54.750 ======================================================== 00:16:54.750 Total : 256.00 0.12 1002145.33 1000061.55 1005788.92 00:16:54.750 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3501311 00:16:55.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3501311) - No such process 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3501311 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.317 21:03:45 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:55.317 rmmod nvme_rdma 00:16:55.317 rmmod nvme_fabrics 00:16:55.317 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.317 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:16:55.317 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3500434 ']' 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3500434 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3500434 ']' 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3500434 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3500434 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3500434' 00:16:55.318 killing process with pid 3500434 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 
3500434 00:16:55.318 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 3500434 00:16:55.576 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.576 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:55.576 00:16:55.576 real 0m19.673s 00:16:55.576 user 0m48.691s 00:16:55.576 sys 0m6.216s 00:16:55.576 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:55.576 21:03:46 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:55.576 ************************************ 00:16:55.576 END TEST nvmf_delete_subsystem 00:16:55.576 ************************************ 00:16:55.576 21:03:46 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:16:55.576 21:03:46 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:55.576 21:03:46 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:55.576 21:03:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:55.576 ************************************ 00:16:55.576 START TEST nvmf_ns_masking 00:16:55.576 ************************************ 00:16:55.576 21:03:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:16:55.835 * Looking for test storage... 00:16:55.835 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=216e0c9e-f22d-472e-bd97-8a4fd2d99bd1 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:55.835 21:03:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local 
-ga mlx 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.458 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:02.459 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:02.459 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
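At this point the pci_bus_cache lookups have matched the two Mellanox ConnectX ports (vendor 0x15b3, device 0x1015) at 0000:d9:00.0 and 0000:d9:00.1; the "Found net devices under ..." lines that follow read the matching netdev names out of sysfs. A rough stand-alone equivalent is sketched below, as an illustration only: common.sh walks its own prebuilt PCI cache rather than calling lspci.

    # Hypothetical replay of the NIC discovery step: list Mellanox
    # (vendor 0x15b3) PCI functions, then report the net devices under
    # each one from sysfs, using the same glob as common.sh@383.
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done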
00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:02.459 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:02.459 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:02.459 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:02.459 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:02.459 altname enp217s0f0np0 00:17:02.459 altname ens818f0np0 00:17:02.459 inet 192.168.100.8/24 scope global mlx_0_0 00:17:02.459 valid_lft forever preferred_lft forever 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print 
$4}' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:02.459 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:02.459 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:02.459 altname enp217s0f1np1 00:17:02.459 altname ens818f1np1 00:17:02.459 inet 192.168.100.9/24 scope global mlx_0_1 00:17:02.459 valid_lft forever preferred_lft forever 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:02.459 
21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:02.459 192.168.100.9' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:02.459 192.168.100.9' 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:02.459 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:02.460 192.168.100.9' 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3505815 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3505815 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3505815 ']' 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
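The address harvesting above reduces to a single pipeline per RDMA interface. The helper below is assembled from the common.sh@112-113 trace lines; only the function wrapper around them is added.

    # get_ip_address as seen in the xtrace: print an interface's IPv4
    # address with the prefix length stripped
    # ("192.168.100.8/24" -> "192.168.100.8").
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # 192.168.100.8 on this testbed
    get_ip_address mlx_0_1    # 192.168.100.9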
00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:02.460 21:03:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.460 [2024-07-13 21:03:52.615706] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:02.460 [2024-07-13 21:03:52.615762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.460 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.460 [2024-07-13 21:03:52.687733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.460 [2024-07-13 21:03:52.729788] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.460 [2024-07-13 21:03:52.729828] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.460 [2024-07-13 21:03:52.729838] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.460 [2024-07-13 21:03:52.729846] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.460 [2024-07-13 21:03:52.729853] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.460 [2024-07-13 21:03:52.729900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.460 [2024-07-13 21:03:52.729919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.460 [2024-07-13 21:03:52.730131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.460 [2024-07-13 21:03:52.730133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.718 21:03:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.718 21:03:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:17:02.718 21:03:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.718 21:03:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.718 21:03:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:02.718 21:03:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.718 21:03:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:02.976 [2024-07-13 21:03:53.645129] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13b1c80/0x13b6170) succeed. 00:17:02.976 [2024-07-13 21:03:53.655469] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13b32c0/0x13f7800) succeed. 
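With both IB devices created, the target-side setup performed in the following trace lines condenses to a handful of RPC calls. Every flag below is copied from the ns_masking.sh@47-58 invocations in the trace; the rpc variable standing in for the full scripts/rpc.py path is the only assumption.

    # Condensed target setup for the masking test, as traced below.
    rpc="$SPDK_ROOT/scripts/rpc.py"    # SPDK_ROOT is assumed
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1    # 64 MiB bdev, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420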
00:17:02.976 21:03:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:17:02.976 21:03:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:17:02.976 21:03:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:03.234 Malloc1 00:17:03.234 21:03:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:03.493 Malloc2 00:17:03.493 21:03:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:03.493 21:03:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:03.752 21:03:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:04.010 [2024-07-13 21:03:54.674532] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:04.010 21:03:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:17:04.010 21:03:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 216e0c9e-f22d-472e-bd97-8a4fd2d99bd1 -a 192.168.100.8 -s 4420 -i 4 00:17:04.268 21:03:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:17:04.268 21:03:54 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:17:04.268 21:03:54 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:17:04.268 21:03:54 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:17:04.268 21:03:54 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:17:06.169 21:03:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:17:06.169 21:03:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:17:06.169 21:03:56 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:17:06.169 21:03:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:17:06.169 21:03:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:17:06.169 21:03:57 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:17:06.169 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:17:06.169 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme 
list-ns /dev/nvme0 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:17:06.428 [ 0]:0x1 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bc397b8fa246485ea2c9778dbc1dac47 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bc397b8fa246485ea2c9778dbc1dac47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:17:06.428 [ 0]:0x1 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:06.428 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bc397b8fa246485ea2c9778dbc1dac47 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bc397b8fa246485ea2c9778dbc1dac47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:17:06.708 [ 1]:0x2 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8a5ef25a467a4e4890f6b030af26ff23 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8a5ef25a467a4e4890f6b030af26ff23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:17:06.708 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:06.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.966 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.224 21:03:57 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:07.224 21:03:58 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:17:07.224 21:03:58 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 216e0c9e-f22d-472e-bd97-8a4fd2d99bd1 -a 192.168.100.8 -s 4420 -i 4 00:17:07.790 21:03:58 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:07.790 21:03:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:17:07.790 21:03:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:17:07.790 21:03:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:17:07.790 21:03:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:17:07.790 21:03:58 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:09.694 21:04:00 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:17:09.694 [ 0]:0x2 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8a5ef25a467a4e4890f6b030af26ff23 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8a5ef25a467a4e4890f6b030af26ff23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:09.694 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:17:09.953 [ 0]:0x1 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bc397b8fa246485ea2c9778dbc1dac47 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bc397b8fa246485ea2c9778dbc1dac47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:17:09.953 [ 1]:0x2 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8a5ef25a467a4e4890f6b030af26ff23 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8a5ef25a467a4e4890f6b030af26ff23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:09.953 21:04:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:10.212 21:04:01 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:10.212 [ 0]:0x2 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:10.212 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:10.471 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8a5ef25a467a4e4890f6b030af26ff23 00:17:10.471 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8a5ef25a467a4e4890f6b030af26ff23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:10.471 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:17:10.471 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:10.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.730 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:10.730 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:17:10.730 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 216e0c9e-f22d-472e-bd97-8a4fd2d99bd1 -a 192.168.100.8 -s 4420 -i 4 00:17:11.299 21:04:01 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:11.299 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:17:11.299 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.299 21:04:01 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:17:11.299 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:17:11.299 21:04:01 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:17:13.202 21:04:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:17:13.203 [ 0]:0x1 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:13.203 21:04:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:13.203 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=bc397b8fa246485ea2c9778dbc1dac47 00:17:13.203 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ bc397b8fa246485ea2c9778dbc1dac47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.203 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:17:13.203 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:13.203 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:17:13.203 [ 1]:0x2 00:17:13.203 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:13.203 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:13.203 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8a5ef25a467a4e4890f6b030af26ff23 00:17:13.203 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8a5ef25a467a4e4890f6b030af26ff23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.203 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 
-- # valid_exec_arg ns_is_visible 0x1 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:13.461 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:17:13.462 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.462 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:13.462 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:13.462 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:13.462 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:13.462 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:17:13.462 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:13.462 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:17:13.462 [ 0]:0x2 00:17:13.462 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:13.462 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8a5ef25a467a4e4890f6b030af26ff23 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8a5ef25a467a4e4890f6b030af26ff23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:17:13.720 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:13.720 [2024-07-13 21:04:04.524026] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:13.720 request: 00:17:13.720 { 00:17:13.720 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.720 "nsid": 2, 00:17:13.720 "host": "nqn.2016-06.io.spdk:host1", 00:17:13.720 "method": "nvmf_ns_remove_host", 00:17:13.720 "req_id": 1 00:17:13.720 } 00:17:13.721 Got JSON-RPC error response 00:17:13.721 response: 00:17:13.721 { 00:17:13.721 "code": -32602, 00:17:13.721 "message": "Invalid parameters" 00:17:13.721 } 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:13.721 21:04:04 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:13.979 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:13.980 21:04:04 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:13.980 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:17:13.980 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:13.980 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:17:13.980 [ 0]:0x2 00:17:13.980 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:13.980 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:13.980 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8a5ef25a467a4e4890f6b030af26ff23 00:17:13.980 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8a5ef25a467a4e4890f6b030af26ff23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.980 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:17:13.980 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:14.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.238 21:04:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:14.497 rmmod nvme_rdma 00:17:14.497 rmmod nvme_fabrics 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3505815 ']' 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3505815 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3505815 ']' 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3505815 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3505815 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 
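The visibility probe repeated throughout the ns_masking run above reduces to two nvme-cli calls: list the namespaces the controller currently exposes, then read the NGUID of a given NSID; a namespace masked via nvmf_ns_remove_host identifies with an all-zero NGUID. A minimal sketch of that check, reconstructed from the traced commands (the helper name mirrors target/ns_masking.sh, but this is a readability sketch, not the script verbatim):

    # Succeeds when controller $1 exposes namespace id $2 (e.g. 0x2)
    # with a non-zero NGUID; a masked namespace reports all zeros.
    ns_is_visible() {
        local ctrl=$1 nsid=$2 nguid
        nvme list-ns "$ctrl" | grep -q "$nsid" || return 1
        nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]
    }
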
00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3505815' 00:17:14.497 killing process with pid 3505815 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3505815 00:17:14.497 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3505815 00:17:14.757 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:14.757 21:04:05 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:14.757 00:17:14.757 real 0m19.161s 00:17:14.757 user 0m55.443s 00:17:14.757 sys 0m6.217s 00:17:14.757 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:14.757 21:04:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:14.757 ************************************ 00:17:14.757 END TEST nvmf_ns_masking 00:17:14.757 ************************************ 00:17:14.757 21:04:05 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:17:14.757 21:04:05 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:14.757 21:04:05 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:14.757 21:04:05 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:14.757 21:04:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:14.757 ************************************ 00:17:14.757 START TEST nvmf_nvme_cli 00:17:14.757 ************************************ 00:17:14.757 21:04:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:15.016 * Looking for test storage... 
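The nvmf_nvme_cli test starting here provisions the target through the rpc_cmd calls traced further down (one RDMA transport, two 64 MiB/512 B malloc bdevs, a subsystem carrying both namespaces, and listeners on 192.168.100.8:4420) before driving it with stock nvme-cli. Condensed into plain rpc.py invocations for readability (a sketch of the traced sequence, with scripts/rpc.py on PATH assumed and error handling omitted):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
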
00:17:15.016 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.016 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.017 21:04:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.619 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:21.620 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:21.620 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:21.620 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.620 21:04:11 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:21.620 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:21.620 21:04:11 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:21.620 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:21.620 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:21.620 altname enp217s0f0np0 00:17:21.620 altname ens818f0np0 00:17:21.620 inet 192.168.100.8/24 scope global mlx_0_0 00:17:21.620 valid_lft forever preferred_lft forever 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:21.620 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:21.620 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:21.620 altname enp217s0f1np1 00:17:21.620 altname ens818f1np1 00:17:21.620 inet 192.168.100.9/24 scope global mlx_0_1 00:17:21.620 valid_lft forever preferred_lft forever 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:21.620 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:21.621 192.168.100.9' 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:21.621 192.168.100.9' 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:21.621 192.168.100.9' 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3511291 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3511291 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3511291 ']' 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:21.621 21:04:12 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:21.621 [2024-07-13 21:04:12.232756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:21.621 [2024-07-13 21:04:12.232806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.621 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.621 [2024-07-13 21:04:12.302283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.621 [2024-07-13 21:04:12.341977] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.621 [2024-07-13 21:04:12.342025] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.621 [2024-07-13 21:04:12.342034] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.621 [2024-07-13 21:04:12.342043] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.621 [2024-07-13 21:04:12.342050] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
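The nvmfappstart/waitforlisten step above is essentially a poll loop against the target's JSON-RPC socket: launch nvmf_tgt in the background, then retry a cheap RPC until the socket answers or the process dies. Roughly (a sketch, not the exact common.sh implementation; rpc_get_methods is used here only as an inexpensive liveness probe):

    # Start the target, then block until /var/tmp/spdk.sock accepts RPCs.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
        sleep 0.5
    done

Once the socket answers, the test proceeds with the subsystem provisioning traced below.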
00:17:21.621 [2024-07-13 21:04:12.342100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.621 [2024-07-13 21:04:12.342196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.621 [2024-07-13 21:04:12.342283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.621 [2024-07-13 21:04:12.342284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.189 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:22.189 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:17:22.189 21:04:13 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.189 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.189 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:22.448 [2024-07-13 21:04:13.115589] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a6ac80/0x1a6f170) succeed. 00:17:22.448 [2024-07-13 21:04:13.125910] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a6c2c0/0x1ab0800) succeed. 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:22.448 Malloc0 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:22.448 Malloc1 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.448 21:04:13 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:22.448 [2024-07-13 21:04:13.318671] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.448 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:17:22.705 00:17:22.705 Discovery Log Number of Records 2, Generation counter 2 00:17:22.705 =====Discovery Log Entry 0====== 00:17:22.705 trtype: rdma 00:17:22.705 adrfam: ipv4 00:17:22.705 subtype: current discovery subsystem 00:17:22.705 treq: not required 00:17:22.705 portid: 0 00:17:22.705 trsvcid: 4420 00:17:22.705 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:22.705 traddr: 192.168.100.8 00:17:22.705 eflags: explicit discovery connections, duplicate discovery information 00:17:22.705 rdma_prtype: not specified 00:17:22.705 rdma_qptype: connected 00:17:22.705 rdma_cms: rdma-cm 00:17:22.705 rdma_pkey: 0x0000 00:17:22.706 =====Discovery Log Entry 1====== 00:17:22.706 trtype: rdma 00:17:22.706 adrfam: ipv4 00:17:22.706 subtype: nvme subsystem 00:17:22.706 treq: not required 00:17:22.706 portid: 0 00:17:22.706 trsvcid: 4420 00:17:22.706 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:22.706 traddr: 192.168.100.8 00:17:22.706 eflags: none 00:17:22.706 rdma_prtype: not specified 00:17:22.706 rdma_qptype: connected 00:17:22.706 rdma_cms: rdma-cm 00:17:22.706 rdma_pkey: 0x0000 00:17:22.706 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:22.706 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:22.706 21:04:13 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:22.706 21:04:13 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:22.706 21:04:13 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:22.706 21:04:13 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:22.706 21:04:13 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:22.706 21:04:13 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:22.706 21:04:13 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:22.706 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:22.706 21:04:13 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:23.643 21:04:14 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:23.643 21:04:14 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:17:23.643 21:04:14 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:17:23.643 21:04:14 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:17:23.643 21:04:14 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:17:23.643 21:04:14 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:17:25.545 21:04:16 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:17:25.545 21:04:16 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:17:25.545 21:04:16 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:17:25.545 21:04:16 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:17:25.545 21:04:16 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:17:25.545 21:04:16 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:17:25.545 21:04:16 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:25.545 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:25.545 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.545 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:25.804 /dev/nvme0n1 ]] 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.804 21:04:16 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:25.804 21:04:16 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:26.740 rmmod nvme_rdma 00:17:26.740 rmmod nvme_fabrics 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3511291 ']' 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3511291 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3511291 ']' 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3511291 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3511291 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3511291' 00:17:26.740 killing process with pid 3511291 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3511291 00:17:26.740 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3511291 00:17:26.999 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.000 21:04:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:27.000 00:17:27.000 real 0m12.243s 00:17:27.000 user 0m23.684s 00:17:27.000 sys 0m5.493s 00:17:27.000 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:27.000 21:04:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:27.000 ************************************ 00:17:27.000 END TEST nvmf_nvme_cli 00:17:27.000 ************************************ 00:17:27.259 21:04:17 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:17:27.259 21:04:17 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:27.259 21:04:17 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:27.259 21:04:17 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:27.259 21:04:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:27.259 ************************************ 00:17:27.259 START TEST nvmf_host_management 00:17:27.259 ************************************ 00:17:27.259 21:04:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:27.259 * Looking for test storage... 
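Both tests above exit through the same teardown path just traced: disconnect the initiator, delete the subsystem over RPC, unload the kernel fabrics modules (the rmmod lines in the log), and kill the target. In sequence, approximately:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma       # may also pull out nvme-fabrics
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # killprocess in the log
    wait "$nvmfpid" 2>/dev/null || true
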
00:17:27.259 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:27.259 21:04:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.833 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:33.834 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:33.834 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:33.834 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:33.834 
21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:33.834 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:33.834 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.834 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:33.834 altname enp217s0f0np0 00:17:33.834 altname ens818f0np0 00:17:33.834 inet 192.168.100.8/24 scope global mlx_0_0 00:17:33.834 valid_lft forever preferred_lft forever 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:33.834 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.834 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:33.834 altname enp217s0f1np1 00:17:33.834 altname ens818f1np1 00:17:33.834 inet 192.168.100.9/24 scope global mlx_0_1 00:17:33.834 valid_lft forever preferred_lft forever 
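The two mlx_0_* interfaces above resolve to 192.168.100.8 and 192.168.100.9. A minimal standalone sketch of the extraction traced at nvmf/common.sh@112-113 (helper name and interface taken from the trace; assumes iproute2 is available):

get_ip_address() {
    local interface=$1
    # 4th field of `ip -o -4 addr show` is addr/prefix; cut drops the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this testbed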
00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:17:33.834 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:17:33.835 21:04:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:33.835 192.168.100.9' 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:33.835 192.168.100.9' 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:33.835 192.168.100.9' 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3515527 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3515527 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3515527 ']' 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:33.835 [2024-07-13 21:04:24.082919] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:33.835 [2024-07-13 21:04:24.082972] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.835 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.835 [2024-07-13 21:04:24.153444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.835 [2024-07-13 21:04:24.193188] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.835 [2024-07-13 21:04:24.193227] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.835 [2024-07-13 21:04:24.193237] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.835 [2024-07-13 21:04:24.193246] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.835 [2024-07-13 21:04:24.193253] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.835 [2024-07-13 21:04:24.193354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.835 [2024-07-13 21:04:24.193457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.835 [2024-07-13 21:04:24.193566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.835 [2024-07-13 21:04:24.193568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:33.835 [2024-07-13 21:04:24.361790] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe1ff70/0xe24460) succeed. 00:17:33.835 [2024-07-13 21:04:24.372161] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe215b0/0xe65af0) succeed. 
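A few records up, nvmf/common.sh@456-458 splits the newline-separated RDMA_IP_LIST into the first and second target IPs with head/tail. A self-contained sketch of that pattern, with the addresses copied from this run:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP" "$NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9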
00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:33.835 Malloc0 00:17:33.835 [2024-07-13 21:04:24.548826] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3515598 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3515598 /var/tmp/bdevperf.sock 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3515598 ']' 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
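The rpcs.txt that host_management.sh cats into rpc_cmd is never echoed, so the batched RPCs are not visible in this log. Below is a hedged reconstruction consistent with what is visible -- the transport options, the 64 MiB/512 B Malloc0 bdev (host_management.sh@11-12), the RDMA listener notice, and the cnode0/host0 names used later. The method names are standard scripts/rpc.py calls, but the order and per-call flags are assumptions:

# assumed reconstruction; only the nvmf_create_transport flags appear verbatim above
rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0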
00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:33.835 { 00:17:33.835 "params": { 00:17:33.835 "name": "Nvme$subsystem", 00:17:33.835 "trtype": "$TEST_TRANSPORT", 00:17:33.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:33.835 "adrfam": "ipv4", 00:17:33.835 "trsvcid": "$NVMF_PORT", 00:17:33.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:33.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:33.835 "hdgst": ${hdgst:-false}, 00:17:33.835 "ddgst": ${ddgst:-false} 00:17:33.835 }, 00:17:33.835 "method": "bdev_nvme_attach_controller" 00:17:33.835 } 00:17:33.835 EOF 00:17:33.835 )") 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:33.835 21:04:24 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:33.835 "params": { 00:17:33.835 "name": "Nvme0", 00:17:33.835 "trtype": "rdma", 00:17:33.835 "traddr": "192.168.100.8", 00:17:33.835 "adrfam": "ipv4", 00:17:33.835 "trsvcid": "4420", 00:17:33.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:33.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:33.835 "hdgst": false, 00:17:33.835 "ddgst": false 00:17:33.835 }, 00:17:33.835 "method": "bdev_nvme_attach_controller" 00:17:33.835 }' 00:17:33.835 [2024-07-13 21:04:24.653334] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:33.836 [2024-07-13 21:04:24.653388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515598 ] 00:17:33.836 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.095 [2024-07-13 21:04:24.726798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.095 [2024-07-13 21:04:24.765464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.095 Running I/O for 10 seconds... 
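The heredoc/jq machinery traced above (nvmf/common.sh@532-558) builds one JSON fragment per subsystem, joins the fragments on commas, and validates the result with jq before handing it to bdevperf over /dev/fd/63. A runnable sketch using the exact values printed in this run (any outer wrapper bdevperf expects is not shown in the trace and is likewise omitted here):

config=()
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme0",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
# subshell so the comma IFS used for the array join does not leak out
(IFS=,; printf '%s\n' "${config[*]}") | jq .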
00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1705 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1705 -ge 100 ']' 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.664 21:04:25 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.924 21:04:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.924 21:04:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:35.863 [2024-07-13 21:04:26.551766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:17:35.863 [2024-07-13 21:04:26.551802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.863 [2024-07-13 21:04:26.551821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:17:35.863 [2024-07-13 21:04:26.551831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.863 [2024-07-13 21:04:26.551842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:17:35.863 [2024-07-13 21:04:26.551852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.863 [2024-07-13 21:04:26.551862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:17:35.863 [2024-07-13 21:04:26.551872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.863 [2024-07-13 21:04:26.551883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:17:35.863 [2024-07-13 21:04:26.551892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.863 [2024-07-13 21:04:26.551902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:17:35.863 [2024-07-13 21:04:26.551912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.863 [2024-07-13 21:04:26.551922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:17:35.863 [2024-07-13 21:04:26.551931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.863 [2024-07-13 21:04:26.551942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:17:35.863 [2024-07-13 21:04:26.551951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.863 [2024-07-13 21:04:26.551962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107520 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001385a500 len:0x10000 key:0x182500 00:17:35.864 [2024-07-13 21:04:26.551971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.551981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:17:35.864 [2024-07-13 21:04:26.551990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:17:35.864 [2024-07-13 21:04:26.552010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:17:35.864 [2024-07-13 21:04:26.552036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:17:35.864 [2024-07-13 21:04:26.552056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:17:35.864 [2024-07-13 21:04:26.552075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 
key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:17:35.864 
[2024-07-13 21:04:26.552336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:17:35.864 [2024-07-13 21:04:26.552356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:17:35.864 [2024-07-13 21:04:26.552375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:17:35.864 [2024-07-13 21:04:26.552395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:17:35.864 [2024-07-13 21:04:26.552414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:17:35.864 [2024-07-13 21:04:26.552434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:17:35.864 [2024-07-13 21:04:26.552454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:17:35.864 [2024-07-13 21:04:26.552478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:17:35.864 [2024-07-13 21:04:26.552498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [2024-07-13 21:04:26.552508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:17:35.864 [2024-07-13 
21:04:26.552517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.864 [... a long run of near-identical command/completion pairs elided (21:04:26.552528 through 21:04:26.553048): each remaining WRITE (lba 111104-111232, len:128) and READ (lba 103168-106240, len:128) outstanding on qid:1 was aborted with this same ABORTED - SQ DELETION status as the queue pair was deleted ...] 00:17:35.865 [2024-07-13 21:04:26.553057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.865 [2024-07-13 21:04:26.553068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e9a0000 len:0x10000 key:0x182400 00:17:35.865 [2024-07-13 21:04:26.553077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:674c0000 sqhd:52d0 p:0 m:0 dnr:0 00:17:35.865 [2024-07-13 21:04:26.555046] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:17:35.865 [2024-07-13 21:04:26.555927] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:35.865 task offset: 106496 on job bdev=Nvme0n1 fails 00:17:35.865 00:17:35.865 Latency(us) 00:17:35.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.865 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:35.865 Job: Nvme0n1 ended in about 1.61 seconds with error 00:17:35.865 Verification LBA range: start 0x0 length 0x400 00:17:35.865 Nvme0n1 : 1.61 1135.28 70.96 39.70 0.00 53958.94 2110.26 1020054.73 00:17:35.865 =================================================================================================================== 00:17:35.865 Total : 1135.28 70.96 39.70 0.00 53958.94 2110.26 1020054.73 00:17:35.865 [2024-07-13 21:04:26.557463] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:35.865 21:04:26 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3515598 00:17:35.865 21:04:26 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:35.865 21:04:26 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:35.865 21:04:26 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:35.865 21:04:26 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:35.865 21:04:26 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:35.865 21:04:26 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:35.865 21:04:26 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:35.865 { 00:17:35.865 "params": { 00:17:35.865 "name": "Nvme$subsystem", 00:17:35.865 "trtype": "$TEST_TRANSPORT", 00:17:35.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.865 "adrfam": "ipv4", 00:17:35.865 "trsvcid": "$NVMF_PORT", 00:17:35.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.866 "hdgst": ${hdgst:-false}, 00:17:35.866 "ddgst": ${ddgst:-false} 00:17:35.866 }, 00:17:35.866 "method": "bdev_nvme_attach_controller" 00:17:35.866 } 00:17:35.866 EOF 00:17:35.866 )") 00:17:35.866 21:04:26 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:35.866 21:04:26 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:17:35.866 21:04:26 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:35.866 21:04:26 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:35.866 "params": { 00:17:35.866 "name": "Nvme0", 00:17:35.866 "trtype": "rdma", 00:17:35.866 "traddr": "192.168.100.8", 00:17:35.866 "adrfam": "ipv4", 00:17:35.866 "trsvcid": "4420", 00:17:35.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:35.866 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:35.866 "hdgst": false, 00:17:35.866 "ddgst": false 00:17:35.866 }, 00:17:35.866 "method": "bdev_nvme_attach_controller" 00:17:35.866 }' 00:17:35.866 [2024-07-13 21:04:26.612985] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:35.866 [2024-07-13 21:04:26.613044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515881 ] 00:17:35.866 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.866 [2024-07-13 21:04:26.684488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.866 [2024-07-13 21:04:26.723091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.125 Running I/O for 1 seconds... 00:17:37.063 00:17:37.063 Latency(us) 00:17:37.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.063 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:37.063 Verification LBA range: start 0x0 length 0x400 00:17:37.063 Nvme0n1 : 1.01 3117.61 194.85 0.00 0.00 20121.03 976.49 42781.90 00:17:37.063 =================================================================================================================== 00:17:37.063 Total : 3117.61 194.85 0.00 0.00 20121.03 976.49 42781.90 00:17:37.322 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3515598 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:37.322 rmmod nvme_rdma 00:17:37.322 rmmod nvme_fabrics 00:17:37.322 
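
For reference, the config handed to bdevperf on /dev/fd/62 above is built from a per-subsystem heredoc template, joined, and pretty-printed with jq before the run. Below is a minimal sketch of that pattern, assuming a single subsystem and hard-coding the values from the printed config; the real gen_nvmf_target_json helper in nvmf/common.sh loops over its arguments instead.

# Minimal sketch, assuming one subsystem; values copied from the config
# printed above. jq . validates and pretty-prints the JSON, as in the test.
gen_target_json() {
    local subsystem=${1:-0}
    jq . <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# bdevperf then reads it via process substitution, matching the invocation above:
#   build/examples/bdevperf --json <(gen_target_json 0) -q 64 -o 65536 -w verify -t 1
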
21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3515527 ']' 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3515527 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3515527 ']' 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3515527 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:37.322 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3515527 00:17:37.581 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:37.581 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:37.581 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3515527' 00:17:37.581 killing process with pid 3515527 00:17:37.581 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3515527 00:17:37.581 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3515527 00:17:37.581 [2024-07-13 21:04:28.463029] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:37.841 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:37.841 21:04:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:37.841 21:04:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:37.841 00:17:37.841 real 0m10.521s 00:17:37.841 user 0m21.705s 00:17:37.841 sys 0m5.638s 00:17:37.841 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.841 21:04:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:37.841 ************************************ 00:17:37.841 END TEST nvmf_host_management 00:17:37.841 ************************************ 00:17:37.842 21:04:28 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:37.842 21:04:28 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:37.842 21:04:28 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.842 21:04:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:37.842 ************************************ 00:17:37.842 START TEST nvmf_lvol 00:17:37.842 ************************************ 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:37.842 * Looking for test storage... 
00:17:37.842 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:37.842 21:04:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.414 21:04:34 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:44.414 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:44.414 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:44.414 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:44.414 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:44.414 21:04:34 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:44.414 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:44.414 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:44.414 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:44.415 altname enp217s0f0np0 00:17:44.415 altname ens818f0np0 00:17:44.415 inet 192.168.100.8/24 scope global mlx_0_0 00:17:44.415 valid_lft forever preferred_lft forever 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:44.415 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:44.415 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:44.415 altname enp217s0f1np1 00:17:44.415 altname ens818f1np1 00:17:44.415 inet 192.168.100.9/24 scope global mlx_0_1 00:17:44.415 valid_lft forever preferred_lft forever 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:44.415 192.168.100.9' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:44.415 192.168.100.9' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:44.415 192.168.100.9' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3519325 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3519325 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3519325 ']' 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 
00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:44.415 [2024-07-13 21:04:34.720043] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:44.415 [2024-07-13 21:04:34.720099] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.415 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.415 [2024-07-13 21:04:34.790291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:44.415 [2024-07-13 21:04:34.829058] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.415 [2024-07-13 21:04:34.829098] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.415 [2024-07-13 21:04:34.829108] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.415 [2024-07-13 21:04:34.829116] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.415 [2024-07-13 21:04:34.829139] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.415 [2024-07-13 21:04:34.829187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.415 [2024-07-13 21:04:34.829279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.415 [2024-07-13 21:04:34.829282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.415 21:04:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:44.415 [2024-07-13 21:04:35.141423] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f17120/0x1f1b610) succeed. 00:17:44.415 [2024-07-13 21:04:35.151587] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f186c0/0x1f5cca0) succeed. 
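
The lvol test that follows exercises the grow path end to end through rpc.py. The sketch below condenses the calls traced in the log after this point, with the long workspace path shortened and the UUID each create call returns captured in a shell variable; sizes and names are as in the trace, so this mirrors nvmf_lvol.sh rather than replacing it.

# Condensed sketch of the rpc.py sequence traced below (paths shortened,
# UUIDs captured instead of the literal values printed in the log).
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512                                     # -> Malloc0
$rpc bdev_malloc_create 64 512                                     # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                     # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                    # initial size 20
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)                # snapshot UUID
$rpc bdev_lvol_resize "$lvol" 30                                   # grow 20 -> 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)                     # clone UUID
$rpc bdev_lvol_inflate "$clone"                                    # allocate clone's own clusters
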
00:17:44.415 21:04:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.674 21:04:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:44.674 21:04:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.932 21:04:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:44.932 21:04:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:44.932 21:04:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:45.200 21:04:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8a9bec24-a641-4d26-9380-eb94639bc354 00:17:45.200 21:04:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8a9bec24-a641-4d26-9380-eb94639bc354 lvol 20 00:17:45.501 21:04:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f1afe496-9a75-458e-a5bd-38815ef80a9b 00:17:45.501 21:04:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:45.501 21:04:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f1afe496-9a75-458e-a5bd-38815ef80a9b 00:17:45.760 21:04:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:46.019 [2024-07-13 21:04:36.679683] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:46.019 21:04:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:46.019 21:04:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3519870 00:17:46.019 21:04:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:46.019 21:04:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:46.277 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.212 21:04:37 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f1afe496-9a75-458e-a5bd-38815ef80a9b MY_SNAPSHOT 00:17:47.212 21:04:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3256df35-6887-495a-ba07-0015f3b9ce56 00:17:47.212 21:04:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f1afe496-9a75-458e-a5bd-38815ef80a9b 30 00:17:47.470 21:04:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3256df35-6887-495a-ba07-0015f3b9ce56 MY_CLONE 00:17:47.728 21:04:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=eca072ec-a922-4954-91dc-4f089c702fcb 00:17:47.728 21:04:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate eca072ec-a922-4954-91dc-4f089c702fcb 00:17:47.985 21:04:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3519870 00:17:57.959 Initializing NVMe Controllers 00:17:57.959 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:17:57.959 Controller IO queue size 128, less than required. 00:17:57.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:57.959 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:57.959 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:57.959 Initialization complete. Launching workers. 00:17:57.959 ======================================================== 00:17:57.959 Latency(us) 00:17:57.959 Device Information : IOPS MiB/s Average min max 00:17:57.959 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16242.50 63.45 7882.43 1941.21 50943.04 00:17:57.959 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16155.10 63.11 7924.93 3507.52 47358.32 00:17:57.959 ======================================================== 00:17:57.959 Total : 32397.60 126.55 7903.62 1941.21 50943.04 00:17:57.959 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f1afe496-9a75-458e-a5bd-38815ef80a9b 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a9bec24-a641-4d26-9380-eb94639bc354 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:57.959 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:57.959 rmmod nvme_rdma 00:17:57.959 rmmod nvme_fabrics 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3519325 ']' 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3519325 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3519325 ']' 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@950 -- # kill -0 3519325 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3519325 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3519325' 00:17:58.219 killing process with pid 3519325 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3519325 00:17:58.219 21:04:48 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3519325 00:17:58.486 21:04:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:58.486 21:04:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:58.486 00:17:58.486 real 0m20.629s 00:17:58.486 user 1m8.886s 00:17:58.486 sys 0m5.679s 00:17:58.486 21:04:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:58.486 21:04:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:58.486 ************************************ 00:17:58.486 END TEST nvmf_lvol 00:17:58.486 ************************************ 00:17:58.486 21:04:49 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:58.486 21:04:49 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:58.486 21:04:49 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:58.486 21:04:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:58.486 ************************************ 00:17:58.486 START TEST nvmf_lvs_grow 00:17:58.486 ************************************ 00:17:58.486 21:04:49 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:58.486 * Looking for test storage... 
00:17:58.752 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:58.752 21:04:49 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:58.753 21:04:49 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:05.323 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:05.324 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:05.324 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:05.324 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.324 21:04:55 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:05.324 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:05.324 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:05.324 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:05.324 altname enp217s0f0np0 00:18:05.324 altname ens818f0np0 00:18:05.324 inet 192.168.100.8/24 scope global mlx_0_0 00:18:05.324 valid_lft forever preferred_lft forever 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:05.324 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:05.324 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:05.324 altname enp217s0f1np1 00:18:05.324 altname ens818f1np1 00:18:05.324 inet 192.168.100.9/24 scope global mlx_0_1 00:18:05.324 valid_lft forever preferred_lft forever 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:05.324 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:05.324 192.168.100.9' 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:05.325 192.168.100.9' 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:05.325 192.168.100.9' 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3524988 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3524988 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3524988 ']' 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:05.325 21:04:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:05.325 [2024-07-13 21:04:55.514357] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:05.325 [2024-07-13 21:04:55.514408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.325 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.325 [2024-07-13 21:04:55.586223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.325 [2024-07-13 21:04:55.624113] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.325 [2024-07-13 21:04:55.624156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.325 [2024-07-13 21:04:55.624169] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.325 [2024-07-13 21:04:55.624194] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.325 [2024-07-13 21:04:55.624201] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
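The trace up to here is the standard nvmftestinit/nvmfappstart bring-up: the IB/RDMA kernel modules are loaded, the two ConnectX ports mlx_0_0 and mlx_0_1 carry 192.168.100.8 and 192.168.100.9, and nvmf_tgt is launched on core 0 with all tracepoint groups enabled. A condensed sketch of that sequence follows (paths are the ones from this workspace; the waitforlisten helper is approximated by polling the RPC socket, not copied verbatim from autotest_common.sh):

    # Sketch of the bring-up traced above, not the literal helpers.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"              # load_ib_rdma_modules
    done
    modprobe nvme-rdma               # host-side driver for the later connects
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # waitforlisten: block until the target answers on /var/tmp/spdk.sock
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done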
00:18:05.325 [2024-07-13 21:04:55.624224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.584 21:04:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:05.584 21:04:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:18:05.584 21:04:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.584 21:04:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.584 21:04:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:05.584 21:04:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.584 21:04:56 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:05.843 [2024-07-13 21:04:56.515780] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e07b50/0x1e0c040) succeed. 00:18:05.843 [2024-07-13 21:04:56.525018] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e09050/0x1e4d6d0) succeed. 00:18:05.843 21:04:56 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:18:05.843 21:04:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:05.843 21:04:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:05.843 21:04:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:05.843 ************************************ 00:18:05.844 START TEST lvs_grow_clean 00:18:05.844 ************************************ 00:18:05.844 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:18:05.844 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:05.844 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:05.844 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:05.844 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:05.844 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:05.844 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:05.844 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:05.844 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:05.844 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:06.102 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:06.102 21:04:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:06.361 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=68970ba2-99d5-43b5-b141-467587d18c8f 00:18:06.361 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68970ba2-99d5-43b5-b141-467587d18c8f 00:18:06.361 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:06.361 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:06.361 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:06.361 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 68970ba2-99d5-43b5-b141-467587d18c8f lvol 150 00:18:06.620 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f0ab72c0-5cde-4edb-95da-26cc1ee5d937 00:18:06.620 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:06.620 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:06.620 [2024-07-13 21:04:57.499249] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:06.620 [2024-07-13 21:04:57.499297] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:06.620 true 00:18:06.880 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68970ba2-99d5-43b5-b141-467587d18c8f 00:18:06.880 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:06.880 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:06.880 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:07.139 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f0ab72c0-5cde-4edb-95da-26cc1ee5d937 00:18:07.139 21:04:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:07.397 [2024-07-13 21:04:58.153437] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:07.397 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:07.657 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3525478 
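Everything lvs_grow needs before I/O can start is now in place. Stripped of the xtrace noise, the setup phase above reduces to this RPC sequence (every call appears verbatim in the trace; the aio file path is shortened here for readability, and UUIDs are per-run values):

    # Condensed from the trace above; rpc.py is scripts/rpc.py in the SPDK tree.
    truncate -s 200M aio_file                                   # backing file
    rpc.py bdev_aio_create aio_file aio_bdev 4096               # AIO bdev, 4 KiB blocks
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # 49 data clusters
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)          # 150 MiB lvol
    truncate -s 400M aio_file                                   # pre-grow the file
    rpc.py bdev_aio_rescan aio_bdev                             # 51200 -> 102400 blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
          -t rdma -a 192.168.100.8 -s 4420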
00:18:07.657 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:07.657 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3525478 /var/tmp/bdevperf.sock
00:18:07.657 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3525478 ']'
00:18:07.657 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:18:07.657 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:07.657 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:07.657 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:07.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:07.657 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable
00:18:07.657 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:18:07.657 [2024-07-13 21:04:58.373366] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:18:07.657 [2024-07-13 21:04:58.373423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525478 ]
00:18:07.657 EAL: No free 2048 kB hugepages reported on node 1
00:18:07.657 [2024-07-13 21:04:58.443698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:07.657 [2024-07-13 21:04:58.481561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:07.916 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:18:07.916 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0
00:18:07.916 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:18:07.916 Nvme0n1
00:18:08.176 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:18:08.176 [
00:18:08.176 {
00:18:08.176 "name": "Nvme0n1",
00:18:08.176 "aliases": [
00:18:08.176 "f0ab72c0-5cde-4edb-95da-26cc1ee5d937"
00:18:08.176 ],
00:18:08.176 "product_name": "NVMe disk",
00:18:08.176 "block_size": 4096,
00:18:08.176 "num_blocks": 38912,
00:18:08.176 "uuid": "f0ab72c0-5cde-4edb-95da-26cc1ee5d937",
00:18:08.176 "assigned_rate_limits": {
00:18:08.176 "rw_ios_per_sec": 0,
00:18:08.176 "rw_mbytes_per_sec": 0,
00:18:08.176 "r_mbytes_per_sec": 0,
00:18:08.176 "w_mbytes_per_sec": 0
00:18:08.176 },
00:18:08.176 "claimed": false,
00:18:08.176 "zoned": false,
00:18:08.176 "supported_io_types": {
00:18:08.176 "read": true,
00:18:08.176 "write": true,
00:18:08.176 "unmap": true,
00:18:08.176 "write_zeroes": true,
00:18:08.176 "flush": true,
00:18:08.176 "reset": true,
00:18:08.176 "compare": true,
00:18:08.176 "compare_and_write": true,
00:18:08.176 "abort": true,
00:18:08.176 "nvme_admin": true,
00:18:08.176 "nvme_io": true
00:18:08.176 },
00:18:08.176 "memory_domains": [
00:18:08.176 {
00:18:08.176 "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:18:08.176 "dma_device_type": 0
00:18:08.176 }
00:18:08.176 ],
00:18:08.176 "driver_specific": {
00:18:08.176 "nvme": [
00:18:08.176 {
00:18:08.176 "trid": {
00:18:08.176 "trtype": "RDMA",
00:18:08.176 "adrfam": "IPv4",
00:18:08.176 "traddr": "192.168.100.8",
00:18:08.176 "trsvcid": "4420",
00:18:08.176 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:18:08.176 },
00:18:08.176 "ctrlr_data": {
00:18:08.176 "cntlid": 1,
00:18:08.176 "vendor_id": "0x8086",
00:18:08.176 "model_number": "SPDK bdev Controller",
00:18:08.176 "serial_number": "SPDK0",
00:18:08.176 "firmware_revision": "24.05.1",
00:18:08.176 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:18:08.176 "oacs": {
00:18:08.176 "security": 0,
00:18:08.176 "format": 0,
00:18:08.176 "firmware": 0,
00:18:08.176 "ns_manage": 0
00:18:08.176 },
00:18:08.176 "multi_ctrlr": true,
00:18:08.176 "ana_reporting": false
00:18:08.176 },
00:18:08.176 "vs": {
00:18:08.176 "nvme_version": "1.3"
00:18:08.176 },
00:18:08.176 "ns_data": {
00:18:08.176 "id": 1,
00:18:08.176 "can_share": true
00:18:08.176 }
00:18:08.176 }
00:18:08.176 ],
00:18:08.176 "mp_policy": "active_passive"
00:18:08.176 }
00:18:08.176 }
00:18:08.176 ]
00:18:08.176 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3525724
00:18:08.176 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:18:08.176 21:04:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:08.176 Running I/O for 10 seconds...
00:18:09.555 Latency(us)
00:18:09.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:09.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:09.555 Nvme0n1 : 1.00 35104.00 137.12 0.00 0.00 0.00 0.00 0.00
00:18:09.555 ===================================================================================================================
00:18:09.555 Total : 35104.00 137.12 0.00 0.00 0.00 0.00 0.00
00:18:09.555
00:18:10.122 21:05:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 68970ba2-99d5-43b5-b141-467587d18c8f
00:18:10.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:10.381 Nvme0n1 : 2.00 34992.50 136.69 0.00 0.00 0.00 0.00 0.00
00:18:10.381 ===================================================================================================================
00:18:10.381 Total : 34992.50 136.69 0.00 0.00 0.00 0.00 0.00
00:18:10.381
00:18:10.381 true
00:18:10.381 21:05:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68970ba2-99d5-43b5-b141-467587d18c8f
00:18:10.381 21:05:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:18:10.641 21:05:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:18:10.641 21:05:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:18:10.641 21:05:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3525724
00:18:11.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:11.209 Nvme0n1 : 3.00 35242.67 137.67 0.00 0.00 0.00 0.00 0.00
00:18:11.209 ===================================================================================================================
00:18:11.209 Total : 35242.67 137.67 0.00 0.00 0.00 0.00 0.00
00:18:11.209
00:18:12.674 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:12.674 Nvme0n1 : 4.00 35448.00 138.47 0.00 0.00 0.00 0.00 0.00
00:18:12.674 ===================================================================================================================
00:18:12.674 Total : 35448.00 138.47 0.00 0.00 0.00 0.00 0.00
00:18:12.674
00:18:13.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:13.241 Nvme0n1 : 5.00 35564.40 138.92 0.00 0.00 0.00 0.00 0.00
00:18:13.241 ===================================================================================================================
00:18:13.241 Total : 35564.40 138.92 0.00 0.00 0.00 0.00 0.00
00:18:13.241
00:18:14.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:14.178 Nvme0n1 : 6.00 35652.83 139.27 0.00 0.00 0.00 0.00 0.00
00:18:14.178 ===================================================================================================================
00:18:14.178 Total : 35652.83 139.27 0.00 0.00 0.00 0.00 0.00
00:18:14.178
00:18:15.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:15.557 Nvme0n1 : 7.00 35725.29 139.55 0.00 0.00 0.00 0.00 0.00
00:18:15.557 ===================================================================================================================
00:18:15.557 Total : 35725.29 139.55 0.00 0.00 0.00 0.00 0.00
00:18:15.557
00:18:16.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:16.494 Nvme0n1 : 8.00 35767.62 139.72 0.00 0.00 0.00 0.00 0.00
00:18:16.494 ===================================================================================================================
00:18:16.494 Total : 35767.62 139.72 0.00 0.00 0.00 0.00 0.00
00:18:16.494
00:18:17.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:17.431 Nvme0n1 : 9.00 35808.22 139.88 0.00 0.00 0.00 0.00 0.00
00:18:17.431 ===================================================================================================================
00:18:17.431 Total : 35808.22 139.88 0.00 0.00 0.00 0.00 0.00
00:18:17.431
00:18:18.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:18.368 Nvme0n1 : 10.00 35830.30 139.96 0.00 0.00 0.00 0.00 0.00
00:18:18.368 ===================================================================================================================
00:18:18.368 Total : 35830.30 139.96 0.00 0.00 0.00 0.00 0.00
00:18:18.368
00:18:18.368
00:18:18.368 Latency(us)
00:18:18.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:18.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:18.368 Nvme0n1 : 10.00 35831.72 139.97 0.00 0.00 3569.44 2437.94 12215.91
00:18:18.368 ===================================================================================================================
00:18:18.368 Total : 35831.72 139.97 0.00 0.00 3569.44 2437.94 12215.91
00:18:18.368 0
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3525478
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3525478 ']'
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3525478
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3525478
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3525478'
00:18:18.368 killing process with pid 3525478
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3525478
00:18:18.368 Received shutdown signal, test time was about 10.000000 seconds
00:18:18.368
00:18:18.368 Latency(us)
00:18:18.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:18.368 ===================================================================================================================
00:18:18.368 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:18.368 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3525478
00:18:18.628 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:18:18.887 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:18:18.887 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68970ba2-99d5-43b5-b141-467587d18c8f
00:18:18.887 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:18:19.153 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:18:19.153 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:18:19.153 21:05:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:18:19.414 [2024-07-13 21:05:10.057357] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68970ba2-99d5-43b5-b141-467587d18c8f
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68970ba2-99d5-43b5-b141-467587d18c8f
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68970ba2-99d5-43b5-b141-467587d18c8f
00:18:19.414 request:
00:18:19.414 {
00:18:19.414 "uuid": "68970ba2-99d5-43b5-b141-467587d18c8f",
00:18:19.414 "method": "bdev_lvol_get_lvstores",
00:18:19.414 "req_id": 1
00:18:19.414 }
00:18:19.414 Got JSON-RPC error response
00:18:19.414 response:
00:18:19.414 {
00:18:19.414 "code": -19,
00:18:19.414 "message": "No such device"
00:18:19.414 }
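The block above is the expected-failure half of the test: once bdev_aio_delete hot-removes the base bdev and the lvstore is closed, bdev_lvol_get_lvstores must fail with -19 (No such device). The NOT wrapper that asserts this simply inverts the exit status, roughly like so (a paraphrase, not the exact helper from autotest_common.sh):

    # NOT <cmd>: succeed only when <cmd> fails.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # expected failure, e.g. the -19 seen above
    }
    NOT rpc.py bdev_lvol_get_lvstores -u "$lvs"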
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:18:19.414 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:18:19.673 aio_bdev
00:18:19.673 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f0ab72c0-5cde-4edb-95da-26cc1ee5d937
00:18:19.673 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=f0ab72c0-5cde-4edb-95da-26cc1ee5d937
00:18:19.673 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:18:19.673 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i
00:18:19.673 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]]
00:18:19.673 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:18:19.673 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:18:19.931 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f0ab72c0-5cde-4edb-95da-26cc1ee5d937 -t 2000
00:18:19.931 [
00:18:19.931 {
00:18:19.931 "name": "f0ab72c0-5cde-4edb-95da-26cc1ee5d937",
00:18:19.931 "aliases": [
00:18:19.931 "lvs/lvol"
00:18:19.931 ],
00:18:19.931 "product_name": "Logical Volume",
00:18:19.931 "block_size": 4096,
00:18:19.931 "num_blocks": 38912,
00:18:19.931 "uuid": "f0ab72c0-5cde-4edb-95da-26cc1ee5d937",
00:18:19.931 "assigned_rate_limits": {
00:18:19.931 "rw_ios_per_sec": 0,
00:18:19.931 "rw_mbytes_per_sec": 0,
00:18:19.931 "r_mbytes_per_sec": 0,
00:18:19.931 "w_mbytes_per_sec": 0
00:18:19.931 },
00:18:19.931 "claimed": false,
00:18:19.931 "zoned": false,
00:18:19.931 "supported_io_types": {
00:18:19.931 "read": true,
00:18:19.931 "write": true,
00:18:19.931 "unmap": true,
00:18:19.931 "write_zeroes": true,
00:18:19.931 "flush": false,
00:18:19.931 "reset": true,
00:18:19.931 "compare": false,
00:18:19.931 "compare_and_write": false,
00:18:19.931 "abort": false,
00:18:19.931 "nvme_admin": false,
00:18:19.931 "nvme_io": false
00:18:19.931 },
00:18:19.931 "driver_specific": {
00:18:19.931 "lvol": {
00:18:19.931 "lvol_store_uuid": "68970ba2-99d5-43b5-b141-467587d18c8f",
00:18:19.931 "base_bdev": "aio_bdev",
00:18:19.931 "thin_provision": false,
00:18:19.931 "num_allocated_clusters": 38,
00:18:19.931 "snapshot": false,
00:18:19.931 "clone": false,
00:18:19.931 "esnap_clone": false
00:18:19.931 }
00:18:19.931 }
00:18:19.931 }
00:18:19.931 ]
00:18:19.932 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0
00:18:19.932 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68970ba2-99d5-43b5-b141-467587d18c8f
00:18:19.932 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:18:20.190 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # free_clusters=61
00:18:20.190 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:18:20.190 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 68970ba2-99d5-43b5-b141-467587d18c8f
00:18:20.190 21:05:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:18:20.448 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:18:20.448 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f0ab72c0-5cde-4edb-95da-26cc1ee5d937
00:18:20.448 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68970ba2-99d5-43b5-b141-467587d18c8f
00:18:20.706 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:18:20.965
00:18:20.965 real 0m15.030s
00:18:20.965 user 0m14.746s
00:18:20.965 sys 0m1.157s
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:18:20.965 ************************************
00:18:20.965 END TEST lvs_grow_clean
00:18:20.965 ************************************
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:18:20.965 ************************************
00:18:20.965 START TEST lvs_grow_dirty
00:18:20.965 ************************************
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:20.965 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:21.224 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:21.224 21:05:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:21.224 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=14105590-53db-4b63-9a87-33ff0c1e960b 00:18:21.224 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b 00:18:21.224 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:21.484 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:21.484 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:21.484 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 14105590-53db-4b63-9a87-33ff0c1e960b lvol 150 00:18:21.743 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ac301c5c-04f8-418c-a976-895aaeaa1618 00:18:21.743 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:21.743 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:21.743 [2024-07-13 21:05:12.588605] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:21.743 [2024-07-13 21:05:12.588657] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:21.743 true 00:18:21.743 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b 00:18:21.743 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:22.001 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:22.001 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:22.260 21:05:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ac301c5c-04f8-418c-a976-895aaeaa1618 00:18:22.260 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:22.527 [2024-07-13 21:05:13.270831] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:22.527 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:22.788 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3528195 00:18:22.788 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.788 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:22.788 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3528195 /var/tmp/bdevperf.sock 00:18:22.788 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3528195 ']' 00:18:22.788 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.788 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:22.788 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.788 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:22.788 21:05:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:22.788 [2024-07-13 21:05:13.495391] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
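From here on the dirty pass mirrors the clean one: same aio bdev and lvstore layout, a second bdevperf instance on core 1. The only functional difference is the extra argument threaded through run_test, which flips the dirty comparison traced earlier at target/nvmf_lvs_grow.sh@72 (it evaluated as [[ '' == dirty ]] in the clean pass):

    # Both invocations appear in this log; the body of lvs_grow is shared.
    run_test lvs_grow_clean lvs_grow          # clean teardown path
    run_test lvs_grow_dirty lvs_grow dirty    # exercises the dirty-lvstore path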
00:18:22.788 [2024-07-13 21:05:13.495446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528195 ] 00:18:22.788 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.788 [2024-07-13 21:05:13.566017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.788 [2024-07-13 21:05:13.605782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.725 21:05:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:23.725 21:05:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:23.725 21:05:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:23.725 Nvme0n1 00:18:23.725 21:05:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:23.984 [ 00:18:23.984 { 00:18:23.984 "name": "Nvme0n1", 00:18:23.984 "aliases": [ 00:18:23.984 "ac301c5c-04f8-418c-a976-895aaeaa1618" 00:18:23.984 ], 00:18:23.984 "product_name": "NVMe disk", 00:18:23.984 "block_size": 4096, 00:18:23.984 "num_blocks": 38912, 00:18:23.984 "uuid": "ac301c5c-04f8-418c-a976-895aaeaa1618", 00:18:23.984 "assigned_rate_limits": { 00:18:23.984 "rw_ios_per_sec": 0, 00:18:23.984 "rw_mbytes_per_sec": 0, 00:18:23.984 "r_mbytes_per_sec": 0, 00:18:23.984 "w_mbytes_per_sec": 0 00:18:23.984 }, 00:18:23.984 "claimed": false, 00:18:23.984 "zoned": false, 00:18:23.984 "supported_io_types": { 00:18:23.984 "read": true, 00:18:23.984 "write": true, 00:18:23.984 "unmap": true, 00:18:23.984 "write_zeroes": true, 00:18:23.984 "flush": true, 00:18:23.984 "reset": true, 00:18:23.984 "compare": true, 00:18:23.984 "compare_and_write": true, 00:18:23.984 "abort": true, 00:18:23.984 "nvme_admin": true, 00:18:23.984 "nvme_io": true 00:18:23.984 }, 00:18:23.984 "memory_domains": [ 00:18:23.984 { 00:18:23.984 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:23.984 "dma_device_type": 0 00:18:23.984 } 00:18:23.984 ], 00:18:23.984 "driver_specific": { 00:18:23.984 "nvme": [ 00:18:23.984 { 00:18:23.984 "trid": { 00:18:23.984 "trtype": "RDMA", 00:18:23.984 "adrfam": "IPv4", 00:18:23.984 "traddr": "192.168.100.8", 00:18:23.984 "trsvcid": "4420", 00:18:23.984 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:23.984 }, 00:18:23.984 "ctrlr_data": { 00:18:23.984 "cntlid": 1, 00:18:23.984 "vendor_id": "0x8086", 00:18:23.984 "model_number": "SPDK bdev Controller", 00:18:23.984 "serial_number": "SPDK0", 00:18:23.984 "firmware_revision": "24.05.1", 00:18:23.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:23.984 "oacs": { 00:18:23.984 "security": 0, 00:18:23.984 "format": 0, 00:18:23.984 "firmware": 0, 00:18:23.984 "ns_manage": 0 00:18:23.984 }, 00:18:23.984 "multi_ctrlr": true, 00:18:23.984 "ana_reporting": false 00:18:23.984 }, 00:18:23.984 "vs": { 00:18:23.984 "nvme_version": "1.3" 00:18:23.984 }, 00:18:23.984 "ns_data": { 00:18:23.984 "id": 1, 00:18:23.984 "can_share": true 00:18:23.984 } 00:18:23.984 } 00:18:23.984 ], 00:18:23.984 "mp_policy": "active_passive" 00:18:23.984 } 00:18:23.984 } 00:18:23.984 ] 00:18:23.984 21:05:14 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3528441
00:18:23.984 21:05:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:23.984 21:05:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:18:23.984 Running I/O for 10 seconds...
00:18:25.363 Latency(us)
00:18:25.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:25.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:25.363 Nvme0n1 : 1.00 35235.00 137.64 0.00 0.00 0.00 0.00 0.00
00:18:25.363 ===================================================================================================================
00:18:25.363 Total : 35235.00 137.64 0.00 0.00 0.00 0.00 0.00
00:18:25.363
00:18:25.931 21:05:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 14105590-53db-4b63-9a87-33ff0c1e960b
00:18:25.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:25.931 Nvme0n1 : 2.00 35536.50 138.81 0.00 0.00 0.00 0.00 0.00
00:18:25.931 ===================================================================================================================
00:18:25.931 Total : 35536.50 138.81 0.00 0.00 0.00 0.00 0.00
00:18:25.931
00:18:26.191 true
00:18:26.191 21:05:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b
00:18:26.191 21:05:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:18:26.450 21:05:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:18:26.450 21:05:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:18:26.450 21:05:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3528441
00:18:27.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:27.018 Nvme0n1 : 3.00 35627.67 139.17 0.00 0.00 0.00 0.00 0.00
00:18:27.018 ===================================================================================================================
00:18:27.018 Total : 35627.67 139.17 0.00 0.00 0.00 0.00 0.00
00:18:27.018
00:18:27.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:27.956 Nvme0n1 : 4.00 35744.25 139.63 0.00 0.00 0.00 0.00 0.00
00:18:27.956 ===================================================================================================================
00:18:27.956 Total : 35744.25 139.63 0.00 0.00 0.00 0.00 0.00
00:18:27.956
00:18:29.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:29.334 Nvme0n1 : 5.00 35808.00 139.88 0.00 0.00 0.00 0.00 0.00
00:18:29.334 ===================================================================================================================
00:18:29.334 Total : 35808.00 139.88 0.00 0.00 0.00 0.00 0.00
00:18:29.334
00:18:29.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:29.969 Nvme0n1 : 6.00 35812.50 139.89 0.00 0.00 0.00 0.00 0.00
00:18:29.969 ===================================================================================================================
00:18:29.969 Total : 35812.50 139.89 0.00 0.00 0.00 0.00 0.00
00:18:29.969
00:18:31.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:31.344 Nvme0n1 : 7.00 35836.29 139.99 0.00 0.00 0.00 0.00 0.00
00:18:31.344 ===================================================================================================================
00:18:31.344 Total : 35836.29 139.99 0.00 0.00 0.00 0.00 0.00
00:18:31.344
00:18:32.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:32.277 Nvme0n1 : 8.00 35844.38 140.02 0.00 0.00 0.00 0.00 0.00
00:18:32.277 ===================================================================================================================
00:18:32.277 Total : 35844.38 140.02 0.00 0.00 0.00 0.00 0.00
00:18:32.277
00:18:33.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:33.210 Nvme0n1 : 9.00 35875.22 140.14 0.00 0.00 0.00 0.00 0.00
00:18:33.210 ===================================================================================================================
00:18:33.210 Total : 35875.22 140.14 0.00 0.00 0.00 0.00 0.00
00:18:33.210
00:18:34.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:34.144 Nvme0n1 : 10.00 35897.80 140.23 0.00 0.00 0.00 0.00 0.00
00:18:34.144 ===================================================================================================================
00:18:34.144 Total : 35897.80 140.23 0.00 0.00 0.00 0.00 0.00
00:18:34.144
00:18:34.144
00:18:34.144 Latency(us)
00:18:34.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:34.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:34.145 Nvme0n1 : 10.00 35897.33 140.22 0.00 0.00 3562.82 2634.55 9856.61
00:18:34.145 ===================================================================================================================
00:18:34.145 Total : 35897.33 140.22 0.00 0.00 3562.82 2634.55 9856.61
00:18:34.145 0
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3528195
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3528195 ']'
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3528195
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3528195
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3528195'
00:18:34.145 killing process with pid 3528195
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3528195
00:18:34.145 Received shutdown signal, test time was about 10.000000 seconds
00:18:34.145
00:18:34.145 Latency(us)
00:18:34.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:34.145 ===================================================================================================================
00:18:34.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:34.145 21:05:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3528195
00:18:34.403 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:18:34.403 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:18:34.662 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b
00:18:34.662 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:18:34.920 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:18:34.920 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:18:34.920 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3524988
00:18:34.920 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3524988
00:18:34.920 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3524988 Killed "${NVMF_APP[@]}" "$@"
00:18:34.920 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:18:34.920 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:18:34.920 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:34.920 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable
00:18:34.920 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:18:34.920 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3530306
00:18:34.921 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3530306
00:18:34.921 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:18:34.921 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3530306 ']'
00:18:34.921 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:34.921 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:34.921 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:34.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:34.921 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:34.921 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:34.921 [2024-07-13 21:05:25.733101] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:34.921 [2024-07-13 21:05:25.733159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.921 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.921 [2024-07-13 21:05:25.806330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.179 [2024-07-13 21:05:25.843870] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.179 [2024-07-13 21:05:25.843912] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.179 [2024-07-13 21:05:25.843922] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.179 [2024-07-13 21:05:25.843931] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.179 [2024-07-13 21:05:25.843937] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.179 [2024-07-13 21:05:25.843964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.179 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.179 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:35.179 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.179 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.179 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:35.179 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.179 21:05:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:35.437 [2024-07-13 21:05:26.126354] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:35.437 [2024-07-13 21:05:26.126453] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:35.437 [2024-07-13 21:05:26.126479] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:35.437 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:35.437 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ac301c5c-04f8-418c-a976-895aaeaa1618 00:18:35.437 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=ac301c5c-04f8-418c-a976-895aaeaa1618 00:18:35.437 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:35.437 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:35.437 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:35.437 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:35.437 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:35.438 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ac301c5c-04f8-418c-a976-895aaeaa1618 -t 2000 00:18:35.696 [ 00:18:35.696 { 00:18:35.696 "name": "ac301c5c-04f8-418c-a976-895aaeaa1618", 00:18:35.696 "aliases": [ 00:18:35.696 "lvs/lvol" 00:18:35.696 ], 00:18:35.696 "product_name": "Logical Volume", 00:18:35.696 "block_size": 4096, 00:18:35.696 "num_blocks": 38912, 00:18:35.696 "uuid": "ac301c5c-04f8-418c-a976-895aaeaa1618", 00:18:35.696 "assigned_rate_limits": { 00:18:35.696 "rw_ios_per_sec": 0, 00:18:35.696 "rw_mbytes_per_sec": 0, 00:18:35.696 "r_mbytes_per_sec": 0, 00:18:35.696 "w_mbytes_per_sec": 0 00:18:35.696 }, 00:18:35.696 "claimed": false, 00:18:35.696 "zoned": false, 00:18:35.696 "supported_io_types": { 00:18:35.696 "read": true, 00:18:35.696 "write": true, 00:18:35.696 "unmap": true, 00:18:35.696 "write_zeroes": true, 00:18:35.696 "flush": false, 00:18:35.696 "reset": true, 00:18:35.696 "compare": false, 00:18:35.696 "compare_and_write": false, 00:18:35.696 "abort": false, 00:18:35.696 "nvme_admin": false, 00:18:35.696 "nvme_io": false 00:18:35.696 }, 00:18:35.696 "driver_specific": { 00:18:35.696 "lvol": { 00:18:35.696 "lvol_store_uuid": "14105590-53db-4b63-9a87-33ff0c1e960b", 00:18:35.696 "base_bdev": "aio_bdev", 00:18:35.696 "thin_provision": false, 00:18:35.696 "num_allocated_clusters": 38, 00:18:35.696 "snapshot": false, 00:18:35.696 "clone": false, 00:18:35.696 "esnap_clone": false 00:18:35.696 } 00:18:35.696 } 00:18:35.696 } 00:18:35.696 ] 00:18:35.696 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:35.696 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b 00:18:35.696 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:35.956 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:35.956 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b 00:18:35.956 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:35.956 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:35.956 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:36.215 [2024-07-13 21:05:26.950501] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:36.215 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b 
00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b 00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:36.216 21:05:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b 00:18:36.475 request: 00:18:36.475 { 00:18:36.475 "uuid": "14105590-53db-4b63-9a87-33ff0c1e960b", 00:18:36.475 "method": "bdev_lvol_get_lvstores", 00:18:36.475 "req_id": 1 00:18:36.475 } 00:18:36.475 Got JSON-RPC error response 00:18:36.475 response: 00:18:36.475 { 00:18:36.475 "code": -19, 00:18:36.475 "message": "No such device" 00:18:36.475 } 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:36.475 aio_bdev 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ac301c5c-04f8-418c-a976-895aaeaa1618 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=ac301c5c-04f8-418c-a976-895aaeaa1618 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:36.475 21:05:27 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:36.475 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:36.734 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ac301c5c-04f8-418c-a976-895aaeaa1618 -t 2000 00:18:36.993 [ 00:18:36.993 { 00:18:36.993 "name": "ac301c5c-04f8-418c-a976-895aaeaa1618", 00:18:36.993 "aliases": [ 00:18:36.993 "lvs/lvol" 00:18:36.993 ], 00:18:36.993 "product_name": "Logical Volume", 00:18:36.993 "block_size": 4096, 00:18:36.993 "num_blocks": 38912, 00:18:36.993 "uuid": "ac301c5c-04f8-418c-a976-895aaeaa1618", 00:18:36.993 "assigned_rate_limits": { 00:18:36.993 "rw_ios_per_sec": 0, 00:18:36.993 "rw_mbytes_per_sec": 0, 00:18:36.993 "r_mbytes_per_sec": 0, 00:18:36.993 "w_mbytes_per_sec": 0 00:18:36.993 }, 00:18:36.993 "claimed": false, 00:18:36.993 "zoned": false, 00:18:36.993 "supported_io_types": { 00:18:36.993 "read": true, 00:18:36.993 "write": true, 00:18:36.993 "unmap": true, 00:18:36.993 "write_zeroes": true, 00:18:36.993 "flush": false, 00:18:36.993 "reset": true, 00:18:36.993 "compare": false, 00:18:36.993 "compare_and_write": false, 00:18:36.993 "abort": false, 00:18:36.993 "nvme_admin": false, 00:18:36.993 "nvme_io": false 00:18:36.993 }, 00:18:36.993 "driver_specific": { 00:18:36.993 "lvol": { 00:18:36.993 "lvol_store_uuid": "14105590-53db-4b63-9a87-33ff0c1e960b", 00:18:36.993 "base_bdev": "aio_bdev", 00:18:36.993 "thin_provision": false, 00:18:36.993 "num_allocated_clusters": 38, 00:18:36.993 "snapshot": false, 00:18:36.993 "clone": false, 00:18:36.993 "esnap_clone": false 00:18:36.993 } 00:18:36.993 } 00:18:36.993 } 00:18:36.993 ] 00:18:36.993 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:36.994 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b 00:18:36.994 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:36.994 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:36.994 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14105590-53db-4b63-9a87-33ff0c1e960b 00:18:36.994 21:05:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:37.253 21:05:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:37.253 21:05:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ac301c5c-04f8-418c-a976-895aaeaa1618 00:18:37.512 21:05:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14105590-53db-4b63-9a87-33ff0c1e960b 00:18:37.512 21:05:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:37.771 00:18:37.771 real 0m16.814s 00:18:37.771 user 0m44.439s 00:18:37.771 sys 0m3.284s 00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:37.771 ************************************ 00:18:37.771 END TEST lvs_grow_dirty 00:18:37.771 ************************************ 00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:37.771 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:37.772 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:37.772 nvmf_trace.0 00:18:37.772 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:18:37.772 21:05:28 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:37.772 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:37.772 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:37.772 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:37.772 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:37.772 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:37.772 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.772 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:37.772 rmmod nvme_rdma 00:18:38.031 rmmod nvme_fabrics 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3530306 ']' 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3530306 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3530306 ']' 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3530306 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3530306 00:18:38.031 21:05:28 
nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3530306' 00:18:38.031 killing process with pid 3530306 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3530306 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3530306 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:38.031 00:18:38.031 real 0m39.642s 00:18:38.031 user 1m4.345s 00:18:38.031 sys 0m9.558s 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:38.031 21:05:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:38.031 ************************************ 00:18:38.031 END TEST nvmf_lvs_grow 00:18:38.031 ************************************ 00:18:38.320 21:05:28 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:38.320 21:05:28 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:38.320 21:05:28 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:38.320 21:05:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:38.320 ************************************ 00:18:38.320 START TEST nvmf_bdev_io_wait 00:18:38.320 ************************************ 00:18:38.320 21:05:28 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:38.320 * Looking for test storage... 
00:18:38.320 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:38.320 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.320 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:38.320 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.320 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.320 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.320 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.320 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.320 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.321 21:05:29 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.321 21:05:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.889 
21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:44.889 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:44.890 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:44.890 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:44.890 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:44.890 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:44.890 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:18:45.149 21:05:35 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:45.149 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:45.149 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:45.149 altname enp217s0f0np0 00:18:45.149 altname ens818f0np0 00:18:45.149 inet 192.168.100.8/24 scope global mlx_0_0 00:18:45.149 valid_lft forever preferred_lft forever 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:45.149 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:45.149 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:45.149 altname enp217s0f1np1 00:18:45.149 altname ens818f1np1 00:18:45.149 inet 192.168.100.9/24 scope global mlx_0_1 00:18:45.149 valid_lft forever preferred_lft forever 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:45.149 21:05:35 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:45.149 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:45.150 192.168.100.9' 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:45.150 192.168.100.9' 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:45.150 192.168.100.9' 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3534092 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3534092 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3534092 ']' 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:45.150 21:05:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:45.150 [2024-07-13 21:05:36.008235] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:45.150 [2024-07-13 21:05:36.008292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.408 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.408 [2024-07-13 21:05:36.081520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.408 [2024-07-13 21:05:36.123175] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.408 [2024-07-13 21:05:36.123215] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
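The NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP values above come from slicing RDMA_IP_LIST with head/tail; the per-interface lookup itself is the get_ip_address helper traced at nvmf/common.sh@112-113. A minimal sketch of that lookup, using the interface names this run discovered (not constants of the script):

    # Return the first IPv4 address on an RDMA-capable netdev, as nvmf/common.sh@113 does
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9 in this run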
00:18:45.408 [2024-07-13 21:05:36.123225] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.408 [2024-07-13 21:05:36.123234] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.408 [2024-07-13 21:05:36.123241] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.408 [2024-07-13 21:05:36.123292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.408 [2024-07-13 21:05:36.123313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.408 [2024-07-13 21:05:36.123398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.408 [2024-07-13 21:05:36.123400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.975 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:45.975 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:18:45.975 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.975 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.975 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:45.975 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.975 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:45.975 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.975 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:46.234 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.234 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:46.234 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.234 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:46.234 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.234 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:46.234 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.234 21:05:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:46.234 [2024-07-13 21:05:36.951885] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd6cc30/0xd71120) succeed. 00:18:46.234 [2024-07-13 21:05:36.962044] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd6e270/0xdb27b0) succeed. 
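With the RDMA transport created (the two create_ib_device notices above), the rest of the bdev_io_wait target setup is ordinary RPC traffic against /var/tmp/spdk.sock; rpc_cmd in the trace is a thin wrapper over scripts/rpc.py. Condensed, the sequence bdev_io_wait.sh@18-25 drives above and below is:

    # Target-side bring-up over /var/tmp/spdk.sock (scripts/rpc.py)
    rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool (-p) and cache (-c) so I/O must queue -- the point of this test
    rpc.py framework_start_init         # finish the startup deferred by --wait-for-rpc
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420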
00:18:46.234 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.234 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:46.234 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.234 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:46.234 Malloc0 00:18:46.234 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.234 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:46.234 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.234 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:46.493 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.493 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:46.493 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.493 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:46.493 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.493 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:46.493 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.493 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:46.493 [2024-07-13 21:05:37.143891] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:46.493 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3534374 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3534376 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:46.494 { 00:18:46.494 "params": { 00:18:46.494 "name": "Nvme$subsystem", 00:18:46.494 "trtype": "$TEST_TRANSPORT", 00:18:46.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.494 "adrfam": "ipv4", 00:18:46.494 "trsvcid": "$NVMF_PORT", 00:18:46.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.494 "hdgst": ${hdgst:-false}, 00:18:46.494 "ddgst": ${ddgst:-false} 00:18:46.494 }, 00:18:46.494 "method": "bdev_nvme_attach_controller" 00:18:46.494 } 00:18:46.494 EOF 00:18:46.494 
)") 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3534378 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:46.494 { 00:18:46.494 "params": { 00:18:46.494 "name": "Nvme$subsystem", 00:18:46.494 "trtype": "$TEST_TRANSPORT", 00:18:46.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.494 "adrfam": "ipv4", 00:18:46.494 "trsvcid": "$NVMF_PORT", 00:18:46.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.494 "hdgst": ${hdgst:-false}, 00:18:46.494 "ddgst": ${ddgst:-false} 00:18:46.494 }, 00:18:46.494 "method": "bdev_nvme_attach_controller" 00:18:46.494 } 00:18:46.494 EOF 00:18:46.494 )") 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3534381 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:46.494 { 00:18:46.494 "params": { 00:18:46.494 "name": "Nvme$subsystem", 00:18:46.494 "trtype": "$TEST_TRANSPORT", 00:18:46.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.494 "adrfam": "ipv4", 00:18:46.494 "trsvcid": "$NVMF_PORT", 00:18:46.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.494 "hdgst": ${hdgst:-false}, 00:18:46.494 "ddgst": ${ddgst:-false} 00:18:46.494 }, 00:18:46.494 "method": "bdev_nvme_attach_controller" 00:18:46.494 } 00:18:46.494 EOF 00:18:46.494 )") 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:46.494 21:05:37 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:46.494 { 00:18:46.494 "params": { 00:18:46.494 "name": "Nvme$subsystem", 00:18:46.494 "trtype": "$TEST_TRANSPORT", 00:18:46.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.494 "adrfam": "ipv4", 00:18:46.494 "trsvcid": "$NVMF_PORT", 00:18:46.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.494 "hdgst": ${hdgst:-false}, 00:18:46.494 "ddgst": ${ddgst:-false} 00:18:46.494 }, 00:18:46.494 "method": "bdev_nvme_attach_controller" 00:18:46.494 } 00:18:46.494 EOF 00:18:46.494 )") 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3534374 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:46.494 "params": { 00:18:46.494 "name": "Nvme1", 00:18:46.494 "trtype": "rdma", 00:18:46.494 "traddr": "192.168.100.8", 00:18:46.494 "adrfam": "ipv4", 00:18:46.494 "trsvcid": "4420", 00:18:46.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.494 "hdgst": false, 00:18:46.494 "ddgst": false 00:18:46.494 }, 00:18:46.494 "method": "bdev_nvme_attach_controller" 00:18:46.494 }' 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:46.494 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:46.494 "params": { 00:18:46.494 "name": "Nvme1", 00:18:46.494 "trtype": "rdma", 00:18:46.494 "traddr": "192.168.100.8", 00:18:46.494 "adrfam": "ipv4", 00:18:46.494 "trsvcid": "4420", 00:18:46.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.494 "hdgst": false, 00:18:46.494 "ddgst": false 00:18:46.494 }, 00:18:46.494 "method": "bdev_nvme_attach_controller" 00:18:46.494 }'
21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:46.494 "params": { 00:18:46.494 "name": "Nvme1", 00:18:46.494 "trtype": "rdma", 00:18:46.494 "traddr": "192.168.100.8", 00:18:46.494 "adrfam": "ipv4", 00:18:46.494 "trsvcid": "4420", 00:18:46.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.494 "hdgst": false, 00:18:46.494 "ddgst": false 00:18:46.494 }, 00:18:46.494 "method": "bdev_nvme_attach_controller" 00:18:46.494 }'
21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 21:05:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:46.494 "params": { 00:18:46.494 "name": "Nvme1", 00:18:46.494 "trtype": "rdma", 00:18:46.494 "traddr": "192.168.100.8", 00:18:46.494 "adrfam": "ipv4", 00:18:46.494 "trsvcid": "4420", 00:18:46.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.494 "hdgst": false, 00:18:46.494 "ddgst": false 00:18:46.494 }, 00:18:46.494 "method": "bdev_nvme_attach_controller" 00:18:46.494 }'
[2024-07-13 21:05:37.195350] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... [2024-07-13 21:05:37.195351] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
[2024-07-13 21:05:37.195407] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:46.494
[2024-07-13 21:05:37.195407] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:46.494
[2024-07-13 21:05:37.195884] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... [2024-07-13 21:05:37.195926] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:46.494 [2024-07-13 21:05:37.196481] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:18:46.494 [2024-07-13 21:05:37.196530] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:46.494 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.494 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.754 [2024-07-13 21:05:37.391804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.754 [2024-07-13 21:05:37.417853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:46.754 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.754 [2024-07-13 21:05:37.493110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.754 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.754 [2024-07-13 21:05:37.523122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:46.754 [2024-07-13 21:05:37.550738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.754 [2024-07-13 21:05:37.574557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:47.013 [2024-07-13 21:05:37.645978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.013 [2024-07-13 21:05:37.676848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:47.013 Running I/O for 1 seconds... 00:18:47.013 Running I/O for 1 seconds... 00:18:47.013 Running I/O for 1 seconds... 00:18:47.013 Running I/O for 1 seconds... 00:18:47.951 00:18:47.951 Latency(us) 00:18:47.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.951 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:47.951 Nvme1n1 : 1.01 17673.37 69.04 0.00 0.00 7220.18 5111.81 17301.50 00:18:47.951 =================================================================================================================== 00:18:47.951 Total : 17673.37 69.04 0.00 0.00 7220.18 5111.81 17301.50 00:18:47.951 00:18:47.951 Latency(us) 00:18:47.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.951 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:47.951 Nvme1n1 : 1.00 17304.80 67.60 0.00 0.00 7375.14 4980.74 18140.36 00:18:47.951 =================================================================================================================== 00:18:47.951 Total : 17304.80 67.60 0.00 0.00 7375.14 4980.74 18140.36 00:18:47.951 00:18:47.951 Latency(us) 00:18:47.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.951 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:47.951 Nvme1n1 : 1.00 263812.11 1030.52 0.00 0.00 483.04 190.87 1848.12 00:18:47.951 =================================================================================================================== 00:18:47.951 Total : 263812.11 1030.52 0.00 0.00 483.04 190.87 1848.12 00:18:47.951 00:18:47.951 Latency(us) 00:18:47.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.951 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:47.951 Nvme1n1 : 1.00 14466.33 56.51 0.00 0.00 8827.31 3670.02 20237.52 00:18:47.951 =================================================================================================================== 00:18:47.951 Total : 14466.33 56.51 0.00 0.00 8827.31 3670.02 20237.52 00:18:48.520 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 3534376 00:18:48.520 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3534378 00:18:48.520 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3534381 00:18:48.520 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.520 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.520 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:48.520 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.520 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:48.521 rmmod nvme_rdma 00:18:48.521 rmmod nvme_fabrics 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3534092 ']' 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3534092 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3534092 ']' 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3534092 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3534092 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3534092' 00:18:48.521 killing process with pid 3534092 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3534092 00:18:48.521 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3534092 00:18:48.780 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:48.780 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:48.780 00:18:48.780 real 0m10.603s 00:18:48.780 user 0m21.207s 00:18:48.780 sys 0m6.608s 00:18:48.780 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:18:48.780 21:05:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:48.780 ************************************ 00:18:48.780 END TEST nvmf_bdev_io_wait 00:18:48.780 ************************************ 00:18:48.780 21:05:39 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:48.780 21:05:39 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:48.780 21:05:39 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:48.780 21:05:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:49.040 ************************************ 00:18:49.040 START TEST nvmf_queue_depth 00:18:49.040 ************************************ 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:49.040 * Looking for test storage... 00:18:49.040 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.040 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- 
target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.041 21:05:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.673 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:55.673 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:55.674 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.674 21:05:46 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:55.674 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:55.674 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:55.674 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:55.674 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:55.674 altname enp217s0f0np0 00:18:55.674 altname ens818f0np0 00:18:55.674 inet 192.168.100.8/24 scope global mlx_0_0 00:18:55.674 valid_lft forever preferred_lft forever 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:55.674 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 
00:18:55.674 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:55.674 altname enp217s0f1np1 00:18:55.674 altname ens818f1np1 00:18:55.674 inet 192.168.100.9/24 scope global mlx_0_1 00:18:55.674 valid_lft forever preferred_lft forever 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:55.674 192.168.100.9' 00:18:55.674 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:55.675 192.168.100.9' 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:55.675 192.168.100.9' 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3538064 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3538064 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3538064 ']' 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:55.675 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:55.675 [2024-07-13 21:05:46.436061] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:55.675 [2024-07-13 21:05:46.436114] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.675 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.675 [2024-07-13 21:05:46.504425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.675 [2024-07-13 21:05:46.542339] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.675 [2024-07-13 21:05:46.542382] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.675 [2024-07-13 21:05:46.542391] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.675 [2024-07-13 21:05:46.542400] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.675 [2024-07-13 21:05:46.542406] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.675 [2024-07-13 21:05:46.542435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 [2024-07-13 21:05:46.699914] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14fee50/0x1503340) succeed. 00:18:55.934 [2024-07-13 21:05:46.708539] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1500350/0x15449d0) succeed. 
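nvmfappstart launched nvmf_tgt in the background and then sat in waitforlisten (local max_retries=100 in the trace above) until the RPC socket answered. A minimal sketch of that pattern, assuming a poll via the rpc_get_methods RPC; the real autotest_common.sh helper is more involved:

    # Wait until a freshly started SPDK app (here pid 3538064) serves RPCs on its UNIX socket
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                        # app died while starting
            rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }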
00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 Malloc0 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:55.934 [2024-07-13 21:05:46.798019] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.934 21:05:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3538120 00:18:55.935 21:05:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:55.935 21:05:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:55.935 21:05:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3538120 /var/tmp/bdevperf.sock 00:18:55.935 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3538120 ']' 00:18:55.935 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.935 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:55.935 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
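Because bdevperf was started with -z it idles until configured over its own socket (/var/tmp/bdevperf.sock); the attach and test kickoff that follow in the trace are exactly that. Condensed:

    # Configure and run the waiting bdevperf instance over its dedicated RPC socket
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # then start the configured workload: -q 1024 -o 4096 -w verify -t 10
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests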
00:18:55.935 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:55.935 21:05:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:56.193 [2024-07-13 21:05:46.846635] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:56.193 [2024-07-13 21:05:46.846682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3538120 ] 00:18:56.193 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.193 [2024-07-13 21:05:46.916811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.193 [2024-07-13 21:05:46.955610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.193 21:05:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:56.194 21:05:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:56.194 21:05:47 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:56.194 21:05:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.194 21:05:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:56.452 NVMe0n1 00:18:56.452 21:05:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.452 21:05:47 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.452 Running I/O for 10 seconds... 
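On the initiator side, bdevperf is started with -z so it idles until RPCs arrive on its private socket; the remote controller is then attached and perform_tests launches the 10-second verify job at queue depth 1024 with 4096-byte I/Os. A condensed sketch of those three traced steps (paths relative to the SPDK tree):

# Start bdevperf waiting on its own RPC socket, attach the remote
# subsystem over RDMA, then kick off the configured job.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests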
00:19:06.439 00:19:06.439 Latency(us) 00:19:06.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.440 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:06.440 Verification LBA range: start 0x0 length 0x4000 00:19:06.440 NVMe0n1 : 10.05 18247.38 71.28 0.00 0.00 55982.37 21915.24 36280.73 00:19:06.440 =================================================================================================================== 00:19:06.440 Total : 18247.38 71.28 0.00 0.00 55982.37 21915.24 36280.73 00:19:06.440 0 00:19:06.440 21:05:57 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3538120 00:19:06.440 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3538120 ']' 00:19:06.440 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3538120 00:19:06.440 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:19:06.440 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:06.440 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3538120 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3538120' 00:19:06.699 killing process with pid 3538120 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3538120 00:19:06.699 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.699 00:19:06.699 Latency(us) 00:19:06.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.699 =================================================================================================================== 00:19:06.699 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3538120 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:06.699 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:06.699 rmmod nvme_rdma 00:19:06.699 rmmod nvme_fabrics 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3538064 ']' 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3538064 
00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3538064 ']' 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3538064 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3538064 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3538064' 00:19:06.959 killing process with pid 3538064 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3538064 00:19:06.959 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3538064 00:19:07.219 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:07.219 21:05:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:07.219 00:19:07.219 real 0m18.211s 00:19:07.219 user 0m24.088s 00:19:07.219 sys 0m5.599s 00:19:07.219 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:07.219 21:05:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:07.219 ************************************ 00:19:07.219 END TEST nvmf_queue_depth 00:19:07.219 ************************************ 00:19:07.219 21:05:57 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:07.219 21:05:57 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:07.219 21:05:57 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:07.219 21:05:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:07.219 ************************************ 00:19:07.219 START TEST nvmf_target_multipath 00:19:07.219 ************************************ 00:19:07.219 21:05:57 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:07.219 * Looking for test storage... 
00:19:07.219 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:19:07.219 21:05:58 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.795 21:06:04 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:13.795 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:13.796 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:13.796 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:13.796 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:13.796 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:13.796 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:13.796 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:13.796 altname enp217s0f0np0 00:19:13.796 altname ens818f0np0 00:19:13.796 inet 192.168.100.8/24 scope global mlx_0_0 00:19:13.796 valid_lft forever preferred_lft forever 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:13.796 21:06:04 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:13.796 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:13.796 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:13.796 altname enp217s0f1np1 00:19:13.796 altname ens818f1np1 00:19:13.796 inet 192.168.100.9/24 scope global mlx_0_1 00:19:13.796 valid_lft forever preferred_lft forever 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.796 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:13.797 192.168.100.9' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:13.797 192.168.100.9' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:13.797 192.168.100.9' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:19:13.797 run this test only with TCP transport for now 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:13.797 rmmod nvme_rdma 00:19:13.797 rmmod nvme_fabrics 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:13.797 
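The address discovery traced above walks each RDMA interface, scrapes its single IPv4 address, and then splits the resulting two-line RDMA_IP_LIST into first and second target IPs. A condensed sketch of those exact pipelines (interface names and addresses are this rig's):

# One IPv4 address per RDMA interface, as in get_ip_address above.
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.8
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.9
# The two-line list is then split exactly as traced:
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)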
21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:13.797 00:19:13.797 real 0m6.718s 00:19:13.797 user 0m1.879s 00:19:13.797 sys 0m5.049s 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:13.797 21:06:04 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:13.797 ************************************ 00:19:13.797 END TEST nvmf_target_multipath 00:19:13.797 ************************************ 00:19:14.056 21:06:04 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:14.056 21:06:04 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:14.056 21:06:04 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:14.056 21:06:04 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:14.056 ************************************ 00:19:14.056 START TEST nvmf_zcopy 00:19:14.056 ************************************ 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:14.057 * Looking for test storage... 
00:19:14.057 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.057 21:06:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:20.629 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:20.629 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:20.629 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:20.630 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:20.630 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:20.630 21:06:11 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:20.630 21:06:11 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:20.630 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:20.630 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:20.630 altname enp217s0f0np0 00:19:20.630 altname ens818f0np0 00:19:20.630 inet 192.168.100.8/24 scope global mlx_0_0 00:19:20.630 valid_lft forever preferred_lft forever 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:20.630 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:20.630 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:20.630 altname enp217s0f1np1 00:19:20.630 altname ens818f1np1 00:19:20.630 inet 192.168.100.9/24 scope global mlx_0_1 00:19:20.630 valid_lft forever preferred_lft forever 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:20.630 192.168.100.9' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:20.630 192.168.100.9' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:20.630 192.168.100.9' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:20.630 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3546966 00:19:20.631 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3546966 00:19:20.631 21:06:11 nvmf_rdma.nvmf_zcopy -- 
common/autotest_common.sh@827 -- # '[' -z 3546966 ']' 00:19:20.631 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.631 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:20.631 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.631 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:20.631 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:20.631 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:20.631 [2024-07-13 21:06:11.475724] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:20.631 [2024-07-13 21:06:11.475777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.631 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.890 [2024-07-13 21:06:11.544343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.890 [2024-07-13 21:06:11.581500] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.890 [2024-07-13 21:06:11.581542] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.890 [2024-07-13 21:06:11.581552] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.890 [2024-07-13 21:06:11.581560] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.890 [2024-07-13 21:06:11.581584] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
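An orientation note on the startup sequence above: waitforlisten blocks until the freshly launched nvmf_tgt answers on its JSON-RPC socket before the test issues any RPCs. A minimal stand-in sketch of such a wait loop (not the harness's actual implementation; the rpc_sock name is illustrative, while the /var/tmp/spdk.sock path and the max_retries=100 budget are taken from the trace):

    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do     # retry budget mirrors max_retries=100 above
        [[ -S $rpc_sock ]] && break     # socket appears once nvmf_tgt starts listening
        sleep 0.1
    done
    [[ -S $rpc_sock ]] || { echo "nvmf_tgt never opened $rpc_sock" >&2; exit 1; }
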
00:19:20.890 [2024-07-13 21:06:11.581606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:20.890 Unsupported transport: rdma 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@804 -- # type=--id 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@805 -- # id=0 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@816 -- # for n in $shm_files 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:20.890 nvmf_trace.0 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # return 0 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.890 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:20.890 rmmod nvme_rdma 00:19:20.890 rmmod nvme_fabrics 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3546966 ']' 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3546966 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3546966 ']' 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3546966 00:19:21.149 21:06:11 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3546966 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3546966' 00:19:21.149 killing process with pid 3546966 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3546966 00:19:21.149 21:06:11 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3546966 00:19:21.149 21:06:12 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:21.149 21:06:12 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:21.149 00:19:21.149 real 0m7.238s 00:19:21.149 user 0m2.524s 00:19:21.149 sys 0m5.206s 00:19:21.149 21:06:12 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:21.149 21:06:12 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:21.149 ************************************ 00:19:21.149 END TEST nvmf_zcopy 00:19:21.149 ************************************ 00:19:21.408 21:06:12 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:21.408 21:06:12 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:21.408 21:06:12 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:21.408 21:06:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:21.408 ************************************ 00:19:21.408 START TEST nvmf_nmic 00:19:21.408 ************************************ 00:19:21.408 21:06:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:21.408 * Looking for test storage... 
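Each sub-test in this run is launched through the harness's run_test wrapper, which produces the START/END banners and the closing real/user/sys timing summary seen above for nvmf_zcopy. A rough sketch of such a wrapper, with a simplified banner format and the hypothetical name run_test_sketch (the real helper lives in autotest_common.sh):

    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                      # run the test script with its arguments
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    # e.g. run_test_sketch nvmf_nmic ./nmic.sh --transport=rdma
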
00:19:21.408 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:21.408 21:06:12 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.408 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:21.408 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.408 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.408 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.408 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.409 
21:06:12 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:21.409 21:06:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.015 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:28.016 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:28.016 21:06:18 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:28.016 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:28.016 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:28.016 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:28.016 21:06:18 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:28.016 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:19:28.016 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:28.016 altname enp217s0f0np0 00:19:28.016 altname ens818f0np0 00:19:28.016 inet 192.168.100.8/24 scope global mlx_0_0 00:19:28.016 valid_lft forever preferred_lft forever 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:28.016 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.016 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:28.016 altname enp217s0f1np1 00:19:28.016 altname ens818f1np1 00:19:28.016 inet 192.168.100.9/24 scope global mlx_0_1 00:19:28.016 valid_lft forever preferred_lft forever 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:28.016 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:28.017 192.168.100.9' 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:28.017 192.168.100.9' 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:28.017 192.168.100.9' 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:28.017 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3550272 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3550272 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3550272 ']' 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:28.277 21:06:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:28.277 [2024-07-13 21:06:18.974174] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:28.277 [2024-07-13 21:06:18.974231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.277 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.277 [2024-07-13 21:06:19.046793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:28.277 [2024-07-13 21:06:19.089880] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.277 [2024-07-13 21:06:19.089922] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.277 [2024-07-13 21:06:19.089933] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.277 [2024-07-13 21:06:19.089943] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.277 [2024-07-13 21:06:19.089951] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.277 [2024-07-13 21:06:19.090001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.277 [2024-07-13 21:06:19.094027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.277 [2024-07-13 21:06:19.094049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.277 [2024-07-13 21:06:19.094052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.215 [2024-07-13 21:06:19.848746] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x142bc80/0x1430170) succeed. 00:19:29.215 [2024-07-13 21:06:19.859233] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x142d2c0/0x1471800) succeed. 
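With the RDMA transport created and both mlx5 IB devices registered, the target is configured over JSON-RPC. Condensed, the bring-up that the following trace lines perform through rpc_cmd could be driven directly with scripts/rpc.py; the $rpc variable is shorthand for readability, while every command and argument is copied verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then reaches the subsystem with nvme connect against 192.168.100.8 on ports 4420 and 4421, exactly as target/nmic.sh@41 and @42 do further down.
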
00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.215 Malloc0 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.215 21:06:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.215 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.215 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:29.215 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.215 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.215 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.215 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:29.215 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.215 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.215 [2024-07-13 21:06:20.026362] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:29.215 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.215 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:29.215 test case1: single bdev can't be used in multiple subsystems 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 [2024-07-13 21:06:20.050148] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:29.216 [2024-07-13 
21:06:20.050170] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:29.216 [2024-07-13 21:06:20.050180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:29.216 request: 00:19:29.216 { 00:19:29.216 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:29.216 "namespace": { 00:19:29.216 "bdev_name": "Malloc0", 00:19:29.216 "no_auto_visible": false 00:19:29.216 }, 00:19:29.216 "method": "nvmf_subsystem_add_ns", 00:19:29.216 "req_id": 1 00:19:29.216 } 00:19:29.216 Got JSON-RPC error response 00:19:29.216 response: 00:19:29.216 { 00:19:29.216 "code": -32602, 00:19:29.216 "message": "Invalid parameters" 00:19:29.216 } 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:29.216 Adding namespace failed - expected result. 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:29.216 test case2: host connect to nvmf target in multiple paths 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 [2024-07-13 21:06:20.066209] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.216 21:06:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:30.151 21:06:21 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:31.538 21:06:21 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:31.538 21:06:21 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:19:31.538 21:06:21 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:31.538 21:06:21 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:31.538 21:06:21 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:19:33.451 21:06:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:33.451 21:06:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:33.451 21:06:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:33.451 21:06:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:33.451 21:06:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:33.451 21:06:24 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:19:33.451 21:06:24 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:33.451 [global] 00:19:33.451 thread=1 00:19:33.451 invalidate=1 00:19:33.451 rw=write 00:19:33.451 time_based=1 00:19:33.451 runtime=1 00:19:33.451 ioengine=libaio 00:19:33.451 direct=1 00:19:33.451 bs=4096 00:19:33.451 iodepth=1 00:19:33.451 norandommap=0 00:19:33.451 numjobs=1 00:19:33.451 00:19:33.451 verify_dump=1 00:19:33.451 verify_backlog=512 00:19:33.451 verify_state_save=0 00:19:33.451 do_verify=1 00:19:33.451 verify=crc32c-intel 00:19:33.451 [job0] 00:19:33.451 filename=/dev/nvme0n1 00:19:33.451 Could not set queue depth (nvme0n1) 00:19:33.711 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:33.711 fio-3.35 00:19:33.711 Starting 1 thread 00:19:34.649 00:19:34.649 job0: (groupid=0, jobs=1): err= 0: pid=3551408: Sat Jul 13 21:06:25 2024 00:19:34.649 read: IOPS=7024, BW=27.4MiB/s (28.8MB/s)(27.5MiB/1001msec) 00:19:34.649 slat (nsec): min=8248, max=32171, avg=8971.40, stdev=910.99 00:19:34.649 clat (nsec): min=45094, max=86916, avg=59866.72, stdev=3613.52 00:19:34.649 lat (nsec): min=59240, max=95380, avg=68838.12, stdev=3663.94 00:19:34.649 clat percentiles (nsec): 00:19:34.649 | 1.00th=[52480], 5.00th=[54016], 10.00th=[55552], 20.00th=[56576], 00:19:34.649 | 30.00th=[57600], 40.00th=[58624], 50.00th=[59648], 60.00th=[60672], 00:19:34.649 | 70.00th=[61696], 80.00th=[62720], 90.00th=[64256], 95.00th=[66048], 00:19:34.649 | 99.00th=[69120], 99.50th=[70144], 99.90th=[73216], 99.95th=[74240], 00:19:34.649 | 99.99th=[86528] 00:19:34.649 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:19:34.649 slat (nsec): min=8485, max=42393, avg=10657.85, stdev=1039.56 00:19:34.649 clat (usec): min=35, max=133, avg=57.64, stdev= 3.88 00:19:34.649 lat (usec): min=56, max=176, avg=68.30, stdev= 4.05 00:19:34.649 clat percentiles (usec): 00:19:34.649 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:19:34.649 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:19:34.649 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 63], 95.00th=[ 64], 00:19:34.649 | 99.00th=[ 68], 99.50th=[ 69], 99.90th=[ 72], 99.95th=[ 78], 00:19:34.649 | 99.99th=[ 135] 00:19:34.649 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:19:34.649 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:19:34.649 lat (usec) : 50=0.38%, 100=99.61%, 250=0.01% 00:19:34.649 cpu : usr=9.30%, sys=19.00%, ctx=14200, majf=0, minf=2 00:19:34.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.649 issued rwts: total=7032,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:34.649 00:19:34.649 Run status group 0 (all jobs): 00:19:34.649 READ: bw=27.4MiB/s (28.8MB/s), 27.4MiB/s-27.4MiB/s (28.8MB/s-28.8MB/s), io=27.5MiB (28.8MB), run=1001-1001msec 00:19:34.649 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:19:34.649 00:19:34.649 Disk stats (read/write): 00:19:34.649 nvme0n1: ios=6193/6620, merge=0/0, ticks=310/325, 
in_queue=635, util=90.68% 00:19:34.649 21:06:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:36.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:36.553 21:06:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:36.553 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:19:36.553 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:36.553 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:36.553 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:36.553 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:36.812 rmmod nvme_rdma 00:19:36.812 rmmod nvme_fabrics 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3550272 ']' 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3550272 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3550272 ']' 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3550272 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3550272 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3550272' 00:19:36.812 killing process with pid 3550272 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3550272 00:19:36.812 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3550272 00:19:37.071 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:37.071 21:06:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:37.071 00:19:37.071 real 0m15.764s 00:19:37.071 user 0m45.724s 00:19:37.071 sys 0m6.145s 00:19:37.071 21:06:27 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:37.071 21:06:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:37.071 ************************************ 00:19:37.071 END TEST nvmf_nmic 00:19:37.071 ************************************ 00:19:37.071 21:06:27 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:37.071 21:06:27 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:37.071 21:06:27 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:37.071 21:06:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:37.071 ************************************ 00:19:37.071 START TEST nvmf_fio_target 00:19:37.071 ************************************ 00:19:37.071 21:06:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:37.331 * Looking for test storage... 00:19:37.331 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:37.331 21:06:28 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.899 21:06:34 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:43.899 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:43.899 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:43.899 21:06:34 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:43.899 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:43.899 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:43.899 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:43.900 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:43.900 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:43.900 altname enp217s0f0np0 00:19:43.900 altname ens818f0np0 00:19:43.900 inet 192.168.100.8/24 scope global mlx_0_0 00:19:43.900 valid_lft forever preferred_lft forever 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:43.900 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:43.900 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:43.900 altname enp217s0f1np1 00:19:43.900 altname ens818f1np1 00:19:43.900 inet 
192.168.100.9/24 scope global mlx_0_1 00:19:43.900 valid_lft forever preferred_lft forever 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:43.900 192.168.100.9' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:43.900 192.168.100.9' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:43.900 192.168.100.9' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3555207 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3555207 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3555207 ']' 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.900 21:06:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:43.900 [2024-07-13 21:06:34.565563] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:19:43.900 [2024-07-13 21:06:34.565616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.900 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.900 [2024-07-13 21:06:34.636604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:43.900 [2024-07-13 21:06:34.676307] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.900 [2024-07-13 21:06:34.676348] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.901 [2024-07-13 21:06:34.676358] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.901 [2024-07-13 21:06:34.676366] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.901 [2024-07-13 21:06:34.676389] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:43.901 [2024-07-13 21:06:34.676442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.901 [2024-07-13 21:06:34.676558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.901 [2024-07-13 21:06:34.676642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.901 [2024-07-13 21:06:34.676644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.836 21:06:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:44.836 21:06:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:19:44.836 21:06:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.836 21:06:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.836 21:06:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.836 21:06:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.836 21:06:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:44.836 [2024-07-13 21:06:35.608610] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22d5c80/0x22da170) succeed. 00:19:44.836 [2024-07-13 21:06:35.619221] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22d72c0/0x231b800) succeed. 
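With both mlx5 ports registered, the RPC calls logged below build the fio test bench end to end: seven 64 MB malloc bdevs with 512-byte blocks, a raid0 and a concat array layered on four of them, and an NVMe-oF subsystem exporting everything over RDMA to the initiator. Consolidated for readability, the sequence is roughly the following — a sketch assembled from the commands as they appear in this log, not the fio.sh script itself; $rpc abbreviates the logged scripts/rpc.py path, and the seven identical bdev_malloc_create calls are collapsed into one loop:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # transport was created just above (target/fio.sh@19)
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # Malloc0..Malloc6: size 64 MB, block size 512 B
    for _ in 1 2 3 4 5 6 7; do $rpc bdev_malloc_create 64 512; done
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

Four namespaces end up on cnode1, so after the connect the initiator sees /dev/nvme0n1 through /dev/nvme0n4 — the four filenames the fio job files below are pointed at.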
00:19:45.096 21:06:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:45.096 21:06:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:45.096 21:06:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:45.355 21:06:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:45.355 21:06:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:45.613 21:06:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:45.613 21:06:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:45.872 21:06:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:45.872 21:06:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:45.872 21:06:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:46.131 21:06:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:46.131 21:06:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:46.396 21:06:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:46.396 21:06:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:46.654 21:06:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:46.654 21:06:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:46.654 21:06:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:46.912 21:06:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:46.912 21:06:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:47.171 21:06:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:47.171 21:06:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:47.171 21:06:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:47.429 [2024-07-13 21:06:38.199158] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:47.429 21:06:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:19:47.688 21:06:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:47.946 21:06:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:48.881 21:06:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:48.881 21:06:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:19:48.881 21:06:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:48.881 21:06:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:19:48.881 21:06:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:19:48.881 21:06:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:19:50.817 21:06:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:50.817 21:06:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:50.817 21:06:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:50.817 21:06:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:50.817 21:06:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:50.817 21:06:41 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:50.817 21:06:41 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:50.817 [global] 00:19:50.817 thread=1 00:19:50.817 invalidate=1 00:19:50.817 rw=write 00:19:50.817 time_based=1 00:19:50.817 runtime=1 00:19:50.817 ioengine=libaio 00:19:50.817 direct=1 00:19:50.817 bs=4096 00:19:50.817 iodepth=1 00:19:50.817 norandommap=0 00:19:50.817 numjobs=1 00:19:50.817 00:19:50.817 verify_dump=1 00:19:50.817 verify_backlog=512 00:19:50.817 verify_state_save=0 00:19:50.817 do_verify=1 00:19:50.818 verify=crc32c-intel 00:19:50.818 [job0] 00:19:50.818 filename=/dev/nvme0n1 00:19:50.818 [job1] 00:19:50.818 filename=/dev/nvme0n2 00:19:50.818 [job2] 00:19:50.818 filename=/dev/nvme0n3 00:19:50.818 [job3] 00:19:50.818 filename=/dev/nvme0n4 00:19:51.077 Could not set queue depth (nvme0n1) 00:19:51.077 Could not set queue depth (nvme0n2) 00:19:51.077 Could not set queue depth (nvme0n3) 00:19:51.077 Could not set queue depth (nvme0n4) 00:19:51.334 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.335 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.335 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.335 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:51.335 fio-3.35 00:19:51.335 Starting 4 threads 00:19:52.712 00:19:52.712 job0: (groupid=0, jobs=1): err= 0: pid=3556647: Sat Jul 13 21:06:43 2024 00:19:52.712 read: IOPS=3794, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec) 00:19:52.712 slat 
(nsec): min=8200, max=31299, avg=9121.85, stdev=876.28 00:19:52.712 clat (usec): min=69, max=191, avg=119.21, stdev=14.60 00:19:52.712 lat (usec): min=78, max=199, avg=128.33, stdev=14.59 00:19:52.712 clat percentiles (usec): 00:19:52.712 | 1.00th=[ 78], 5.00th=[ 90], 10.00th=[ 103], 20.00th=[ 111], 00:19:52.712 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 123], 00:19:52.712 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 141], 00:19:52.712 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 184], 00:19:52.712 | 99.99th=[ 192] 00:19:52.712 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:19:52.712 slat (nsec): min=10076, max=68864, avg=10934.92, stdev=1417.20 00:19:52.712 clat (usec): min=65, max=276, avg=110.18, stdev=17.27 00:19:52.712 lat (usec): min=76, max=286, avg=121.12, stdev=17.30 00:19:52.712 clat percentiles (usec): 00:19:52.712 | 1.00th=[ 72], 5.00th=[ 78], 10.00th=[ 83], 20.00th=[ 98], 00:19:52.712 | 30.00th=[ 105], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 116], 00:19:52.712 | 70.00th=[ 119], 80.00th=[ 123], 90.00th=[ 128], 95.00th=[ 133], 00:19:52.712 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 182], 99.95th=[ 184], 00:19:52.712 | 99.99th=[ 277] 00:19:52.712 bw ( KiB/s): min=16384, max=16384, per=22.55%, avg=16384.00, stdev= 0.00, samples=1 00:19:52.712 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:52.712 lat (usec) : 100=15.29%, 250=84.70%, 500=0.01% 00:19:52.712 cpu : usr=6.20%, sys=10.20%, ctx=7895, majf=0, minf=1 00:19:52.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.712 issued rwts: total=3798,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:52.712 job1: (groupid=0, jobs=1): err= 0: pid=3556661: Sat Jul 13 21:06:43 2024 00:19:52.712 read: IOPS=4212, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1001msec) 00:19:52.712 slat (nsec): min=8217, max=34316, avg=9067.70, stdev=933.87 00:19:52.712 clat (usec): min=67, max=175, avg=104.93, stdev=20.36 00:19:52.712 lat (usec): min=77, max=185, avg=113.99, stdev=20.46 00:19:52.712 clat percentiles (usec): 00:19:52.712 | 1.00th=[ 72], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 82], 00:19:52.712 | 30.00th=[ 87], 40.00th=[ 102], 50.00th=[ 112], 60.00th=[ 116], 00:19:52.712 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 128], 95.00th=[ 133], 00:19:52.712 | 99.00th=[ 145], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 174], 00:19:52.712 | 99.99th=[ 176] 00:19:52.712 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:19:52.712 slat (nsec): min=10073, max=45057, avg=10852.54, stdev=1273.64 00:19:52.712 clat (usec): min=62, max=451, avg=98.02, stdev=20.07 00:19:52.712 lat (usec): min=74, max=461, avg=108.87, stdev=20.20 00:19:52.712 clat percentiles (usec): 00:19:52.712 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 78], 00:19:52.712 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 101], 60.00th=[ 109], 00:19:52.712 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 126], 00:19:52.712 | 99.00th=[ 145], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 167], 00:19:52.712 | 99.99th=[ 453] 00:19:52.712 bw ( KiB/s): min=20480, max=20480, per=28.19%, avg=20480.00, stdev= 0.00, samples=1 00:19:52.712 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:52.712 
lat (usec) : 100=44.03%, 250=55.95%, 500=0.01% 00:19:52.712 cpu : usr=8.00%, sys=10.10%, ctx=8825, majf=0, minf=1 00:19:52.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.712 issued rwts: total=4217,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:52.712 job2: (groupid=0, jobs=1): err= 0: pid=3556680: Sat Jul 13 21:06:43 2024 00:19:52.712 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:19:52.712 slat (nsec): min=8373, max=26112, avg=8998.65, stdev=822.64 00:19:52.712 clat (usec): min=72, max=181, avg=98.59, stdev=17.06 00:19:52.712 lat (usec): min=81, max=190, avg=107.59, stdev=17.16 00:19:52.712 clat percentiles (usec): 00:19:52.712 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:19:52.712 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:19:52.712 | 70.00th=[ 98], 80.00th=[ 110], 90.00th=[ 130], 95.00th=[ 137], 00:19:52.712 | 99.00th=[ 145], 99.50th=[ 151], 99.90th=[ 174], 99.95th=[ 180], 00:19:52.712 | 99.99th=[ 182] 00:19:52.712 write: IOPS=4618, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1001msec); 0 zone resets 00:19:52.712 slat (nsec): min=10242, max=38514, avg=10995.16, stdev=1055.93 00:19:52.712 clat (usec): min=71, max=282, avg=94.32, stdev=15.62 00:19:52.712 lat (usec): min=82, max=292, avg=105.32, stdev=15.69 00:19:52.712 clat percentiles (usec): 00:19:52.712 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:19:52.712 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:19:52.712 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 122], 95.00th=[ 127], 00:19:52.712 | 99.00th=[ 139], 99.50th=[ 145], 99.90th=[ 169], 99.95th=[ 169], 00:19:52.712 | 99.99th=[ 281] 00:19:52.712 bw ( KiB/s): min=17864, max=17864, per=24.59%, avg=17864.00, stdev= 0.00, samples=1 00:19:52.712 iops : min= 4466, max= 4466, avg=4466.00, stdev= 0.00, samples=1 00:19:52.712 lat (usec) : 100=75.84%, 250=24.15%, 500=0.01% 00:19:52.712 cpu : usr=6.20%, sys=12.60%, ctx=9231, majf=0, minf=2 00:19:52.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.712 issued rwts: total=4608,4623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:52.712 job3: (groupid=0, jobs=1): err= 0: pid=3556687: Sat Jul 13 21:06:43 2024 00:19:52.712 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:19:52.712 slat (nsec): min=8384, max=30038, avg=9287.01, stdev=870.86 00:19:52.712 clat (usec): min=76, max=131, avg=95.15, stdev= 7.17 00:19:52.712 lat (usec): min=85, max=141, avg=104.44, stdev= 7.23 00:19:52.712 clat percentiles (usec): 00:19:52.712 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:19:52.712 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 97], 00:19:52.712 | 70.00th=[ 98], 80.00th=[ 101], 90.00th=[ 105], 95.00th=[ 109], 00:19:52.712 | 99.00th=[ 116], 99.50th=[ 119], 99.90th=[ 124], 99.95th=[ 128], 00:19:52.712 | 99.99th=[ 133] 00:19:52.712 write: IOPS=4850, BW=18.9MiB/s (19.9MB/s)(19.0MiB/1001msec); 0 zone resets 00:19:52.712 slat (nsec): min=10190, max=41298, avg=11021.63, stdev=1007.72 00:19:52.712 clat 
(usec): min=73, max=279, avg=92.06, stdev= 7.61 00:19:52.712 lat (usec): min=83, max=300, avg=103.08, stdev= 7.76 00:19:52.712 clat percentiles (usec): 00:19:52.712 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:19:52.712 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 93], 00:19:52.712 | 70.00th=[ 95], 80.00th=[ 98], 90.00th=[ 102], 95.00th=[ 105], 00:19:52.712 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 123], 99.95th=[ 130], 00:19:52.712 | 99.99th=[ 281] 00:19:52.712 bw ( KiB/s): min=20480, max=20480, per=28.19%, avg=20480.00, stdev= 0.00, samples=1 00:19:52.713 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:52.713 lat (usec) : 100=82.08%, 250=17.91%, 500=0.01% 00:19:52.713 cpu : usr=6.20%, sys=13.30%, ctx=9463, majf=0, minf=1 00:19:52.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.713 issued rwts: total=4608,4855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.713 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:52.713 00:19:52.713 Run status group 0 (all jobs): 00:19:52.713 READ: bw=67.2MiB/s (70.5MB/s), 14.8MiB/s-18.0MiB/s (15.5MB/s-18.9MB/s), io=67.3MiB (70.6MB), run=1001-1001msec 00:19:52.713 WRITE: bw=71.0MiB/s (74.4MB/s), 16.0MiB/s-18.9MiB/s (16.8MB/s-19.9MB/s), io=71.0MiB (74.5MB), run=1001-1001msec 00:19:52.713 00:19:52.713 Disk stats (read/write): 00:19:52.713 nvme0n1: ios=3121/3494, merge=0/0, ticks=357/353, in_queue=710, util=83.97% 00:19:52.713 nvme0n2: ios=3584/3913, merge=0/0, ticks=324/334, in_queue=658, util=85.09% 00:19:52.713 nvme0n3: ios=3584/3992, merge=0/0, ticks=335/337, in_queue=672, util=88.34% 00:19:52.713 nvme0n4: ios=3771/4096, merge=0/0, ticks=324/355, in_queue=679, util=89.47% 00:19:52.713 21:06:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:52.713 [global] 00:19:52.713 thread=1 00:19:52.713 invalidate=1 00:19:52.713 rw=randwrite 00:19:52.713 time_based=1 00:19:52.713 runtime=1 00:19:52.713 ioengine=libaio 00:19:52.713 direct=1 00:19:52.713 bs=4096 00:19:52.713 iodepth=1 00:19:52.713 norandommap=0 00:19:52.713 numjobs=1 00:19:52.713 00:19:52.713 verify_dump=1 00:19:52.713 verify_backlog=512 00:19:52.713 verify_state_save=0 00:19:52.713 do_verify=1 00:19:52.713 verify=crc32c-intel 00:19:52.713 [job0] 00:19:52.713 filename=/dev/nvme0n1 00:19:52.713 [job1] 00:19:52.713 filename=/dev/nvme0n2 00:19:52.713 [job2] 00:19:52.713 filename=/dev/nvme0n3 00:19:52.713 [job3] 00:19:52.713 filename=/dev/nvme0n4 00:19:52.713 Could not set queue depth (nvme0n1) 00:19:52.713 Could not set queue depth (nvme0n2) 00:19:52.713 Could not set queue depth (nvme0n3) 00:19:52.713 Could not set queue depth (nvme0n4) 00:19:52.713 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:52.713 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:52.713 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:52.713 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:52.713 fio-3.35 00:19:52.713 Starting 4 threads 00:19:54.092 00:19:54.092 job0: (groupid=0, jobs=1): err= 
0: pid=3557060: Sat Jul 13 21:06:44 2024 00:19:54.092 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:19:54.092 slat (nsec): min=8039, max=33511, avg=8856.71, stdev=972.90 00:19:54.092 clat (usec): min=61, max=270, avg=84.03, stdev=20.30 00:19:54.092 lat (usec): min=72, max=278, avg=92.89, stdev=20.34 00:19:54.092 clat percentiles (usec): 00:19:54.092 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:19:54.092 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 80], 00:19:54.092 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 143], 00:19:54.092 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 172], 99.95th=[ 180], 00:19:54.092 | 99.99th=[ 269] 00:19:54.092 write: IOPS=5407, BW=21.1MiB/s (22.1MB/s)(21.1MiB/1001msec); 0 zone resets 00:19:54.092 slat (nsec): min=9836, max=41833, avg=10457.24, stdev=1110.53 00:19:54.092 clat (usec): min=57, max=279, avg=82.80, stdev=21.99 00:19:54.092 lat (usec): min=69, max=289, avg=93.26, stdev=22.14 00:19:54.092 clat percentiles (usec): 00:19:54.092 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 72], 00:19:54.092 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 76], 60.00th=[ 77], 00:19:54.092 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 135], 95.00th=[ 141], 00:19:54.092 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 174], 00:19:54.092 | 99.99th=[ 281] 00:19:54.092 bw ( KiB/s): min=24576, max=24576, per=41.66%, avg=24576.00, stdev= 0.00, samples=1 00:19:54.092 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:19:54.092 lat (usec) : 100=89.13%, 250=10.85%, 500=0.02% 00:19:54.092 cpu : usr=7.30%, sys=13.80%, ctx=10533, majf=0, minf=1 00:19:54.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.092 issued rwts: total=5120,5413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.092 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.092 job1: (groupid=0, jobs=1): err= 0: pid=3557072: Sat Jul 13 21:06:44 2024 00:19:54.092 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:19:54.092 slat (nsec): min=8004, max=30220, avg=8708.29, stdev=947.12 00:19:54.092 clat (usec): min=76, max=219, avg=153.38, stdev=12.17 00:19:54.092 lat (usec): min=84, max=227, avg=162.09, stdev=12.17 00:19:54.092 clat percentiles (usec): 00:19:54.092 | 1.00th=[ 105], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:19:54.092 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:19:54.092 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 169], 00:19:54.092 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 215], 99.95th=[ 219], 00:19:54.092 | 99.99th=[ 219] 00:19:54.092 write: IOPS=3135, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1001msec); 0 zone resets 00:19:54.092 slat (nsec): min=9872, max=37500, avg=10854.71, stdev=1184.27 00:19:54.092 clat (usec): min=70, max=211, avg=144.34, stdev=14.10 00:19:54.092 lat (usec): min=80, max=222, avg=155.19, stdev=14.11 00:19:54.092 clat percentiles (usec): 00:19:54.092 | 1.00th=[ 86], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 139], 00:19:54.092 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:19:54.092 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 161], 00:19:54.092 | 99.00th=[ 182], 99.50th=[ 194], 99.90th=[ 206], 99.95th=[ 208], 00:19:54.092 | 99.99th=[ 212] 00:19:54.092 bw ( KiB/s): min=12296, max=12296, per=20.84%, 
avg=12296.00, stdev= 0.00, samples=1 00:19:54.092 iops : min= 3074, max= 3074, avg=3074.00, stdev= 0.00, samples=1 00:19:54.092 lat (usec) : 100=1.66%, 250=98.34% 00:19:54.092 cpu : usr=3.20%, sys=9.80%, ctx=6211, majf=0, minf=2 00:19:54.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.092 issued rwts: total=3072,3139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.092 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.092 job2: (groupid=0, jobs=1): err= 0: pid=3557091: Sat Jul 13 21:06:44 2024 00:19:54.092 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:19:54.092 slat (nsec): min=8196, max=27443, avg=9281.27, stdev=1063.32 00:19:54.093 clat (usec): min=78, max=226, avg=153.54, stdev=12.28 00:19:54.093 lat (usec): min=87, max=236, avg=162.82, stdev=12.40 00:19:54.093 clat percentiles (usec): 00:19:54.093 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 145], 00:19:54.093 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:19:54.093 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 172], 00:19:54.093 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 219], 99.95th=[ 221], 00:19:54.093 | 99.99th=[ 227] 00:19:54.093 write: IOPS=3085, BW=12.1MiB/s (12.6MB/s)(12.1MiB/1001msec); 0 zone resets 00:19:54.093 slat (nsec): min=10104, max=38549, avg=11472.82, stdev=1259.76 00:19:54.093 clat (usec): min=77, max=226, avg=145.19, stdev=13.45 00:19:54.093 lat (usec): min=89, max=236, avg=156.66, stdev=13.57 00:19:54.093 clat percentiles (usec): 00:19:54.093 | 1.00th=[ 92], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:19:54.093 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:19:54.093 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 163], 00:19:54.093 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 206], 99.95th=[ 212], 00:19:54.093 | 99.99th=[ 227] 00:19:54.093 bw ( KiB/s): min=12288, max=12288, per=20.83%, avg=12288.00, stdev= 0.00, samples=1 00:19:54.093 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:54.093 lat (usec) : 100=1.12%, 250=98.88% 00:19:54.093 cpu : usr=4.60%, sys=8.00%, ctx=6161, majf=0, minf=1 00:19:54.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.093 issued rwts: total=3072,3089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.093 job3: (groupid=0, jobs=1): err= 0: pid=3557098: Sat Jul 13 21:06:44 2024 00:19:54.093 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:19:54.093 slat (nsec): min=8116, max=18215, avg=8829.47, stdev=702.05 00:19:54.093 clat (usec): min=81, max=241, avg=153.54, stdev=13.40 00:19:54.093 lat (usec): min=90, max=249, avg=162.37, stdev=13.41 00:19:54.093 clat percentiles (usec): 00:19:54.093 | 1.00th=[ 96], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:19:54.093 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:19:54.093 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 167], 95.00th=[ 172], 00:19:54.093 | 99.00th=[ 194], 99.50th=[ 206], 99.90th=[ 225], 99.95th=[ 235], 00:19:54.093 | 99.99th=[ 241] 00:19:54.093 write: IOPS=3118, BW=12.2MiB/s 
(12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:19:54.093 slat (nsec): min=10069, max=38915, avg=11127.29, stdev=1219.89 00:19:54.093 clat (usec): min=73, max=212, avg=144.65, stdev=14.36 00:19:54.093 lat (usec): min=84, max=222, avg=155.78, stdev=14.38 00:19:54.093 clat percentiles (usec): 00:19:54.093 | 1.00th=[ 89], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 137], 00:19:54.093 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:19:54.093 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:19:54.093 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 202], 99.95th=[ 206], 00:19:54.093 | 99.99th=[ 212] 00:19:54.093 bw ( KiB/s): min=12288, max=12288, per=20.83%, avg=12288.00, stdev= 0.00, samples=1 00:19:54.093 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:54.093 lat (usec) : 100=2.07%, 250=97.93% 00:19:54.093 cpu : usr=4.70%, sys=8.50%, ctx=6194, majf=0, minf=1 00:19:54.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.093 issued rwts: total=3072,3122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:54.093 00:19:54.093 Run status group 0 (all jobs): 00:19:54.093 READ: bw=55.9MiB/s (58.7MB/s), 12.0MiB/s-20.0MiB/s (12.6MB/s-20.9MB/s), io=56.0MiB (58.7MB), run=1001-1001msec 00:19:54.093 WRITE: bw=57.6MiB/s (60.4MB/s), 12.1MiB/s-21.1MiB/s (12.6MB/s-22.1MB/s), io=57.7MiB (60.5MB), run=1001-1001msec 00:19:54.093 00:19:54.093 Disk stats (read/write): 00:19:54.093 nvme0n1: ios=4657/4834, merge=0/0, ticks=316/329, in_queue=645, util=84.07% 00:19:54.093 nvme0n2: ios=2558/2560, merge=0/0, ticks=372/344, in_queue=716, util=85.10% 00:19:54.093 nvme0n3: ios=2509/2560, merge=0/0, ticks=371/357, in_queue=728, util=88.35% 00:19:54.093 nvme0n4: ios=2542/2560, merge=0/0, ticks=372/351, in_queue=723, util=89.48% 00:19:54.093 21:06:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:54.093 [global] 00:19:54.093 thread=1 00:19:54.093 invalidate=1 00:19:54.093 rw=write 00:19:54.093 time_based=1 00:19:54.093 runtime=1 00:19:54.093 ioengine=libaio 00:19:54.093 direct=1 00:19:54.093 bs=4096 00:19:54.093 iodepth=128 00:19:54.093 norandommap=0 00:19:54.093 numjobs=1 00:19:54.093 00:19:54.093 verify_dump=1 00:19:54.093 verify_backlog=512 00:19:54.093 verify_state_save=0 00:19:54.093 do_verify=1 00:19:54.093 verify=crc32c-intel 00:19:54.093 [job0] 00:19:54.093 filename=/dev/nvme0n1 00:19:54.093 [job1] 00:19:54.093 filename=/dev/nvme0n2 00:19:54.093 [job2] 00:19:54.093 filename=/dev/nvme0n3 00:19:54.093 [job3] 00:19:54.093 filename=/dev/nvme0n4 00:19:54.093 Could not set queue depth (nvme0n1) 00:19:54.093 Could not set queue depth (nvme0n2) 00:19:54.093 Could not set queue depth (nvme0n3) 00:19:54.093 Could not set queue depth (nvme0n4) 00:19:54.352 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:54.352 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:54.352 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:54.352 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:19:54.352 fio-3.35 00:19:54.352 Starting 4 threads 00:19:55.732 00:19:55.732 job0: (groupid=0, jobs=1): err= 0: pid=3557485: Sat Jul 13 21:06:46 2024 00:19:55.732 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:19:55.732 slat (usec): min=2, max=2616, avg=59.46, stdev=226.14 00:19:55.732 clat (usec): min=4017, max=15947, avg=7859.06, stdev=2372.34 00:19:55.732 lat (usec): min=4024, max=15956, avg=7918.51, stdev=2381.52 00:19:55.732 clat percentiles (usec): 00:19:55.732 | 1.00th=[ 6063], 5.00th=[ 6521], 10.00th=[ 6783], 20.00th=[ 6915], 00:19:55.732 | 30.00th=[ 7046], 40.00th=[ 7111], 50.00th=[ 7177], 60.00th=[ 7242], 00:19:55.732 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 8356], 95.00th=[15401], 00:19:55.732 | 99.00th=[15533], 99.50th=[15533], 99.90th=[15795], 99.95th=[15795], 00:19:55.732 | 99.99th=[15926] 00:19:55.732 write: IOPS=8401, BW=32.8MiB/s (34.4MB/s)(32.9MiB/1003msec); 0 zone resets 00:19:55.732 slat (usec): min=2, max=2594, avg=57.36, stdev=213.87 00:19:55.732 clat (usec): min=1591, max=16904, avg=7438.50, stdev=2326.80 00:19:55.732 lat (usec): min=4094, max=17036, avg=7495.86, stdev=2337.73 00:19:55.732 clat percentiles (usec): 00:19:55.732 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 6652], 00:19:55.732 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6849], 00:19:55.732 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[15270], 00:19:55.732 | 99.00th=[15533], 99.50th=[15533], 99.90th=[16909], 99.95th=[16909], 00:19:55.732 | 99.99th=[16909] 00:19:55.732 bw ( KiB/s): min=29536, max=36864, per=31.66%, avg=33200.00, stdev=5181.68, samples=2 00:19:55.732 iops : min= 7384, max= 9216, avg=8300.00, stdev=1295.42, samples=2 00:19:55.732 lat (msec) : 2=0.01%, 10=91.15%, 20=8.85% 00:19:55.732 cpu : usr=4.69%, sys=5.89%, ctx=1243, majf=0, minf=1 00:19:55.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:55.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:55.733 issued rwts: total=8192,8427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:55.733 job1: (groupid=0, jobs=1): err= 0: pid=3557490: Sat Jul 13 21:06:46 2024 00:19:55.733 read: IOPS=9080, BW=35.5MiB/s (37.2MB/s)(35.6MiB/1005msec) 00:19:55.733 slat (usec): min=2, max=1074, avg=53.70, stdev=194.86 00:19:55.733 clat (usec): min=4139, max=11042, avg=7141.88, stdev=432.66 00:19:55.733 lat (usec): min=4902, max=11933, avg=7195.58, stdev=428.45 00:19:55.733 clat percentiles (usec): 00:19:55.733 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6718], 20.00th=[ 6915], 00:19:55.733 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7177], 60.00th=[ 7242], 00:19:55.733 | 70.00th=[ 7308], 80.00th=[ 7373], 90.00th=[ 7504], 95.00th=[ 7635], 00:19:55.733 | 99.00th=[ 8160], 99.50th=[ 9241], 99.90th=[10945], 99.95th=[11076], 00:19:55.733 | 99.99th=[11076] 00:19:55.733 write: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(36.0MiB/1005msec); 0 zone resets 00:19:55.733 slat (usec): min=2, max=1691, avg=51.96, stdev=188.01 00:19:55.733 clat (usec): min=3007, max=8270, avg=6766.46, stdev=459.79 00:19:55.733 lat (usec): min=3022, max=8281, avg=6818.42, stdev=459.77 00:19:55.733 clat percentiles (usec): 00:19:55.733 | 1.00th=[ 5211], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 6652], 00:19:55.733 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6783], 60.00th=[ 6849], 00:19:55.733 | 70.00th=[ 
6915], 80.00th=[ 7046], 90.00th=[ 7177], 95.00th=[ 7308], 00:19:55.733 | 99.00th=[ 7635], 99.50th=[ 7767], 99.90th=[ 7963], 99.95th=[ 8029], 00:19:55.733 | 99.99th=[ 8291] 00:19:55.733 bw ( KiB/s): min=36864, max=36864, per=35.15%, avg=36864.00, stdev= 0.00, samples=2 00:19:55.733 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:19:55.733 lat (msec) : 4=0.21%, 10=99.62%, 20=0.17% 00:19:55.733 cpu : usr=4.08%, sys=7.37%, ctx=1187, majf=0, minf=1 00:19:55.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:55.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:55.733 issued rwts: total=9126,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:55.733 job2: (groupid=0, jobs=1): err= 0: pid=3557509: Sat Jul 13 21:06:46 2024 00:19:55.733 read: IOPS=4239, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1004msec) 00:19:55.733 slat (usec): min=2, max=2238, avg=113.01, stdev=329.35 00:19:55.733 clat (usec): min=2519, max=19380, avg=14717.07, stdev=3870.65 00:19:55.733 lat (usec): min=3853, max=19714, avg=14830.07, stdev=3899.28 00:19:55.733 clat percentiles (usec): 00:19:55.733 | 1.00th=[ 6194], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8848], 00:19:55.733 | 30.00th=[15926], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:19:55.733 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17433], 95.00th=[17695], 00:19:55.733 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:19:55.733 | 99.99th=[19268] 00:19:55.733 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:19:55.733 slat (usec): min=2, max=2762, avg=109.02, stdev=323.02 00:19:55.733 clat (usec): min=6997, max=19140, avg=13977.72, stdev=3675.20 00:19:55.733 lat (usec): min=7001, max=19284, avg=14086.74, stdev=3705.54 00:19:55.733 clat percentiles (usec): 00:19:55.733 | 1.00th=[ 7504], 5.00th=[ 8291], 10.00th=[ 8356], 20.00th=[ 8586], 00:19:55.733 | 30.00th=[10683], 40.00th=[15795], 50.00th=[16188], 60.00th=[16319], 00:19:55.733 | 70.00th=[16581], 80.00th=[16712], 90.00th=[16909], 95.00th=[17171], 00:19:55.733 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18482], 99.95th=[18744], 00:19:55.733 | 99.99th=[19268] 00:19:55.733 bw ( KiB/s): min=16384, max=20480, per=17.58%, avg=18432.00, stdev=2896.31, samples=2 00:19:55.733 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:19:55.733 lat (msec) : 4=0.27%, 10=28.07%, 20=71.66% 00:19:55.733 cpu : usr=2.49%, sys=4.19%, ctx=1115, majf=0, minf=1 00:19:55.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:55.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:55.733 issued rwts: total=4256,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:55.733 job3: (groupid=0, jobs=1): err= 0: pid=3557515: Sat Jul 13 21:06:46 2024 00:19:55.733 read: IOPS=3670, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1003msec) 00:19:55.733 slat (usec): min=2, max=3549, avg=129.67, stdev=389.21 00:19:55.733 clat (usec): min=1590, max=20632, avg=16608.17, stdev=1646.26 00:19:55.733 lat (usec): min=3152, max=20635, avg=16737.83, stdev=1650.12 00:19:55.733 clat percentiles (usec): 00:19:55.733 | 1.00th=[ 8291], 5.00th=[15270], 10.00th=[15401], 20.00th=[16057], 
00:19:55.733 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17171], 00:19:55.733 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[17695], 00:19:55.733 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19530], 99.95th=[19792], 00:19:55.733 | 99.99th=[20579] 00:19:55.733 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:19:55.733 slat (usec): min=2, max=3111, avg=124.00, stdev=363.26 00:19:55.733 clat (usec): min=9898, max=20135, avg=16044.15, stdev=963.88 00:19:55.733 lat (usec): min=9908, max=20152, avg=16168.15, stdev=952.72 00:19:55.733 clat percentiles (usec): 00:19:55.733 | 1.00th=[11994], 5.00th=[14746], 10.00th=[15139], 20.00th=[15401], 00:19:55.733 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16188], 60.00th=[16450], 00:19:55.733 | 70.00th=[16581], 80.00th=[16712], 90.00th=[16909], 95.00th=[17171], 00:19:55.733 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19006], 99.95th=[19268], 00:19:55.733 | 99.99th=[20055] 00:19:55.733 bw ( KiB/s): min=16152, max=16384, per=15.51%, avg=16268.00, stdev=164.05, samples=2 00:19:55.733 iops : min= 4038, max= 4096, avg=4067.00, stdev=41.01, samples=2 00:19:55.733 lat (msec) : 2=0.01%, 4=0.04%, 10=0.95%, 20=98.96%, 50=0.04% 00:19:55.733 cpu : usr=2.30%, sys=3.59%, ctx=1270, majf=0, minf=1 00:19:55.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:55.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:55.733 issued rwts: total=3682,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:55.733 00:19:55.733 Run status group 0 (all jobs): 00:19:55.733 READ: bw=98.2MiB/s (103MB/s), 14.3MiB/s-35.5MiB/s (15.0MB/s-37.2MB/s), io=98.7MiB (103MB), run=1003-1005msec 00:19:55.733 WRITE: bw=102MiB/s (107MB/s), 16.0MiB/s-35.8MiB/s (16.7MB/s-37.6MB/s), io=103MiB (108MB), run=1003-1005msec 00:19:55.733 00:19:55.733 Disk stats (read/write): 00:19:55.733 nvme0n1: ios=7564/7680, merge=0/0, ticks=20960/19984, in_queue=40944, util=84.07% 00:19:55.733 nvme0n2: ios=7481/7680, merge=0/0, ticks=52552/50746, in_queue=103298, util=85.10% 00:19:55.733 nvme0n3: ios=3072/3245, merge=0/0, ticks=25770/25971, in_queue=51741, util=88.35% 00:19:55.733 nvme0n4: ios=3072/3294, merge=0/0, ticks=25734/26026, in_queue=51760, util=89.48% 00:19:55.733 21:06:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:55.733 [global] 00:19:55.733 thread=1 00:19:55.733 invalidate=1 00:19:55.733 rw=randwrite 00:19:55.733 time_based=1 00:19:55.733 runtime=1 00:19:55.733 ioengine=libaio 00:19:55.733 direct=1 00:19:55.733 bs=4096 00:19:55.733 iodepth=128 00:19:55.733 norandommap=0 00:19:55.733 numjobs=1 00:19:55.733 00:19:55.733 verify_dump=1 00:19:55.733 verify_backlog=512 00:19:55.733 verify_state_save=0 00:19:55.733 do_verify=1 00:19:55.733 verify=crc32c-intel 00:19:55.733 [job0] 00:19:55.733 filename=/dev/nvme0n1 00:19:55.733 [job1] 00:19:55.733 filename=/dev/nvme0n2 00:19:55.733 [job2] 00:19:55.733 filename=/dev/nvme0n3 00:19:55.733 [job3] 00:19:55.733 filename=/dev/nvme0n4 00:19:55.733 Could not set queue depth (nvme0n1) 00:19:55.733 Could not set queue depth (nvme0n2) 00:19:55.733 Could not set queue depth (nvme0n3) 00:19:55.733 Could not set queue depth (nvme0n4) 00:19:55.992 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:55.992 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:55.992 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:55.992 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:55.992 fio-3.35 00:19:55.992 Starting 4 threads 00:19:57.372 00:19:57.372 job0: (groupid=0, jobs=1): err= 0: pid=3557899: Sat Jul 13 21:06:48 2024 00:19:57.372 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:19:57.372 slat (usec): min=2, max=8109, avg=134.70, stdev=601.70 00:19:57.372 clat (usec): min=10535, max=30896, avg=17440.88, stdev=4045.86 00:19:57.372 lat (usec): min=10539, max=30931, avg=17575.58, stdev=4079.75 00:19:57.372 clat percentiles (usec): 00:19:57.372 | 1.00th=[11076], 5.00th=[12256], 10.00th=[12780], 20.00th=[14222], 00:19:57.372 | 30.00th=[14877], 40.00th=[15533], 50.00th=[16909], 60.00th=[17957], 00:19:57.372 | 70.00th=[19006], 80.00th=[20055], 90.00th=[23200], 95.00th=[25822], 00:19:57.372 | 99.00th=[29230], 99.50th=[30016], 99.90th=[30278], 99.95th=[30278], 00:19:57.372 | 99.99th=[30802] 00:19:57.372 write: IOPS=3945, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1004msec); 0 zone resets 00:19:57.372 slat (usec): min=2, max=6340, avg=125.78, stdev=544.99 00:19:57.372 clat (usec): min=1621, max=27891, avg=16287.05, stdev=3287.18 00:19:57.372 lat (usec): min=4948, max=27902, avg=16412.83, stdev=3312.49 00:19:57.372 clat percentiles (usec): 00:19:57.372 | 1.00th=[ 7570], 5.00th=[11338], 10.00th=[12518], 20.00th=[13566], 00:19:57.372 | 30.00th=[14484], 40.00th=[15139], 50.00th=[16319], 60.00th=[17433], 00:19:57.372 | 70.00th=[18220], 80.00th=[19268], 90.00th=[20579], 95.00th=[21365], 00:19:57.372 | 99.00th=[22938], 99.50th=[22938], 99.90th=[27919], 99.95th=[27919], 00:19:57.372 | 99.99th=[27919] 00:19:57.372 bw ( KiB/s): min=14864, max=15800, per=16.52%, avg=15332.00, stdev=661.85, samples=2 00:19:57.372 iops : min= 3716, max= 3950, avg=3833.00, stdev=165.46, samples=2 00:19:57.372 lat (msec) : 2=0.01%, 10=1.31%, 20=81.87%, 50=16.81% 00:19:57.372 cpu : usr=3.09%, sys=3.99%, ctx=688, majf=0, minf=1 00:19:57.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:57.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:57.372 issued rwts: total=3584,3961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:57.372 job1: (groupid=0, jobs=1): err= 0: pid=3557908: Sat Jul 13 21:06:48 2024 00:19:57.372 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:19:57.372 slat (usec): min=2, max=5885, avg=83.27, stdev=343.79 00:19:57.372 clat (usec): min=4815, max=23188, avg=10759.44, stdev=4224.95 00:19:57.372 lat (usec): min=4818, max=23200, avg=10842.72, stdev=4247.23 00:19:57.372 clat percentiles (usec): 00:19:57.372 | 1.00th=[ 5211], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5997], 00:19:57.372 | 30.00th=[ 7177], 40.00th=[ 8979], 50.00th=[10683], 60.00th=[11994], 00:19:57.372 | 70.00th=[13173], 80.00th=[14353], 90.00th=[16581], 95.00th=[18482], 00:19:57.372 | 99.00th=[21627], 99.50th=[21627], 99.90th=[22938], 99.95th=[22938], 00:19:57.372 | 99.99th=[23200] 00:19:57.372 write: IOPS=5858, BW=22.9MiB/s (24.0MB/s)(22.9MiB/1001msec); 0 zone resets 
00:19:57.372 slat (usec): min=2, max=5280, avg=85.76, stdev=375.76 00:19:57.372 clat (usec): min=367, max=25635, avg=11271.56, stdev=5087.66 00:19:57.372 lat (usec): min=970, max=26030, avg=11357.32, stdev=5115.84 00:19:57.372 clat percentiles (usec): 00:19:57.372 | 1.00th=[ 2966], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5932], 00:19:57.372 | 30.00th=[ 7570], 40.00th=[ 9634], 50.00th=[10683], 60.00th=[11994], 00:19:57.372 | 70.00th=[13304], 80.00th=[15008], 90.00th=[18482], 95.00th=[22414], 00:19:57.372 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25560], 99.95th=[25560], 00:19:57.372 | 99.99th=[25560] 00:19:57.372 bw ( KiB/s): min=20480, max=20480, per=22.06%, avg=20480.00, stdev= 0.00, samples=1 00:19:57.372 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:57.372 lat (usec) : 500=0.01%, 1000=0.07% 00:19:57.372 lat (msec) : 2=0.19%, 4=0.41%, 10=43.93%, 20=50.87%, 50=4.52% 00:19:57.372 cpu : usr=3.30%, sys=6.30%, ctx=922, majf=0, minf=1 00:19:57.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:57.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:57.372 issued rwts: total=5632,5864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:57.372 job2: (groupid=0, jobs=1): err= 0: pid=3557935: Sat Jul 13 21:06:48 2024 00:19:57.372 read: IOPS=6231, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1004msec) 00:19:57.372 slat (usec): min=2, max=6753, avg=68.76, stdev=413.05 00:19:57.372 clat (usec): min=484, max=21934, avg=9812.02, stdev=3962.23 00:19:57.372 lat (usec): min=1050, max=22019, avg=9880.78, stdev=3994.03 00:19:57.372 clat percentiles (usec): 00:19:57.372 | 1.00th=[ 3621], 5.00th=[ 4228], 10.00th=[ 4948], 20.00th=[ 6194], 00:19:57.372 | 30.00th=[ 7635], 40.00th=[ 8356], 50.00th=[ 9241], 60.00th=[10290], 00:19:57.372 | 70.00th=[11600], 80.00th=[12911], 90.00th=[15008], 95.00th=[18220], 00:19:57.372 | 99.00th=[20055], 99.50th=[20317], 99.90th=[21103], 99.95th=[21627], 00:19:57.372 | 99.99th=[21890] 00:19:57.372 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:19:57.372 slat (usec): min=2, max=7958, avg=74.35, stdev=426.84 00:19:57.372 clat (usec): min=709, max=22866, avg=9870.32, stdev=4351.05 00:19:57.372 lat (usec): min=1069, max=22877, avg=9944.68, stdev=4384.67 00:19:57.372 clat percentiles (usec): 00:19:57.372 | 1.00th=[ 2769], 5.00th=[ 4113], 10.00th=[ 4686], 20.00th=[ 5800], 00:19:57.372 | 30.00th=[ 6980], 40.00th=[ 8455], 50.00th=[ 9503], 60.00th=[10421], 00:19:57.372 | 70.00th=[11338], 80.00th=[13042], 90.00th=[17171], 95.00th=[19006], 00:19:57.372 | 99.00th=[19530], 99.50th=[20841], 99.90th=[21890], 99.95th=[21890], 00:19:57.372 | 99.99th=[22938] 00:19:57.372 bw ( KiB/s): min=25432, max=27688, per=28.61%, avg=26560.00, stdev=1595.23, samples=2 00:19:57.372 iops : min= 6358, max= 6922, avg=6640.00, stdev=398.81, samples=2 00:19:57.372 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:19:57.372 lat (msec) : 2=0.19%, 4=3.39%, 10=52.51%, 20=42.96%, 50=0.91% 00:19:57.372 cpu : usr=3.59%, sys=6.98%, ctx=743, majf=0, minf=1 00:19:57.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:57.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:57.372 issued rwts: total=6256,6656,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:57.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:57.372 job3: (groupid=0, jobs=1): err= 0: pid=3557941: Sat Jul 13 21:06:48 2024 00:19:57.372 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:19:57.372 slat (usec): min=2, max=7136, avg=65.80, stdev=390.36 00:19:57.372 clat (usec): min=1992, max=23631, avg=9709.22, stdev=4362.54 00:19:57.372 lat (usec): min=1995, max=23635, avg=9775.02, stdev=4384.28 00:19:57.372 clat percentiles (usec): 00:19:57.372 | 1.00th=[ 3458], 5.00th=[ 4490], 10.00th=[ 5080], 20.00th=[ 5997], 00:19:57.372 | 30.00th=[ 7046], 40.00th=[ 8029], 50.00th=[ 8979], 60.00th=[ 9634], 00:19:57.372 | 70.00th=[10421], 80.00th=[12125], 90.00th=[17433], 95.00th=[19268], 00:19:57.372 | 99.00th=[21365], 99.50th=[23200], 99.90th=[23462], 99.95th=[23725], 00:19:57.372 | 99.99th=[23725] 00:19:57.372 write: IOPS=6791, BW=26.5MiB/s (27.8MB/s)(26.6MiB/1004msec); 0 zone resets 00:19:57.372 slat (usec): min=2, max=8187, avg=64.31, stdev=401.82 00:19:57.372 clat (usec): min=890, max=25342, avg=9198.48, stdev=4258.14 00:19:57.372 lat (usec): min=998, max=25353, avg=9262.79, stdev=4287.05 00:19:57.372 clat percentiles (usec): 00:19:57.372 | 1.00th=[ 2409], 5.00th=[ 3752], 10.00th=[ 4621], 20.00th=[ 5735], 00:19:57.372 | 30.00th=[ 6652], 40.00th=[ 7635], 50.00th=[ 8455], 60.00th=[ 9110], 00:19:57.372 | 70.00th=[10028], 80.00th=[11338], 90.00th=[16909], 95.00th=[18744], 00:19:57.372 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21890], 99.95th=[23987], 00:19:57.372 | 99.99th=[25297] 00:19:57.372 bw ( KiB/s): min=24864, max=28672, per=28.84%, avg=26768.00, stdev=2692.66, samples=2 00:19:57.372 iops : min= 6216, max= 7168, avg=6692.00, stdev=673.17, samples=2 00:19:57.372 lat (usec) : 1000=0.01% 00:19:57.372 lat (msec) : 2=0.04%, 4=4.20%, 10=63.02%, 20=30.48%, 50=2.25% 00:19:57.372 cpu : usr=3.79%, sys=6.78%, ctx=882, majf=0, minf=1 00:19:57.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:57.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:57.372 issued rwts: total=6656,6819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:57.372 00:19:57.372 Run status group 0 (all jobs): 00:19:57.372 READ: bw=86.1MiB/s (90.3MB/s), 13.9MiB/s-25.9MiB/s (14.6MB/s-27.2MB/s), io=86.4MiB (90.6MB), run=1001-1004msec 00:19:57.372 WRITE: bw=90.7MiB/s (95.1MB/s), 15.4MiB/s-26.5MiB/s (16.2MB/s-27.8MB/s), io=91.0MiB (95.4MB), run=1001-1004msec 00:19:57.372 00:19:57.372 Disk stats (read/write): 00:19:57.372 nvme0n1: ios=3121/3418, merge=0/0, ticks=24385/26966, in_queue=51351, util=84.05% 00:19:57.372 nvme0n2: ios=4096/4335, merge=0/0, ticks=12427/13373, in_queue=25800, util=85.09% 00:19:57.372 nvme0n3: ios=5396/5632, merge=0/0, ticks=35320/34597, in_queue=69917, util=88.02% 00:19:57.372 nvme0n4: ios=6091/6144, merge=0/0, ticks=31233/30837, in_queue=62070, util=88.94% 00:19:57.372 21:06:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:57.372 21:06:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3558046 00:19:57.373 21:06:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:57.373 21:06:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:57.373 [global] 00:19:57.373 thread=1 00:19:57.373 invalidate=1 
00:19:57.373 rw=read 00:19:57.373 time_based=1 00:19:57.373 runtime=10 00:19:57.373 ioengine=libaio 00:19:57.373 direct=1 00:19:57.373 bs=4096 00:19:57.373 iodepth=1 00:19:57.373 norandommap=1 00:19:57.373 numjobs=1 00:19:57.373 00:19:57.373 [job0] 00:19:57.373 filename=/dev/nvme0n1 00:19:57.373 [job1] 00:19:57.373 filename=/dev/nvme0n2 00:19:57.373 [job2] 00:19:57.373 filename=/dev/nvme0n3 00:19:57.373 [job3] 00:19:57.373 filename=/dev/nvme0n4 00:19:57.373 Could not set queue depth (nvme0n1) 00:19:57.373 Could not set queue depth (nvme0n2) 00:19:57.373 Could not set queue depth (nvme0n3) 00:19:57.373 Could not set queue depth (nvme0n4) 00:19:57.630 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:57.630 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:57.630 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:57.630 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:57.630 fio-3.35 00:19:57.630 Starting 4 threads 00:20:00.919 21:06:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:00.919 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=89010176, buflen=4096 00:20:00.919 fio: pid=3558362, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:00.919 21:06:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:00.919 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=84131840, buflen=4096 00:20:00.919 fio: pid=3558355, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:00.919 21:06:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:00.919 21:06:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:00.919 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=22220800, buflen=4096 00:20:00.919 fio: pid=3558329, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:00.919 21:06:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:00.919 21:06:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:01.179 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1912832, buflen=4096 00:20:01.179 fio: pid=3558338, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:01.179 21:06:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:01.179 21:06:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:01.179 00:20:01.179 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3558329: Sat Jul 13 21:06:51 2024 00:20:01.179 read: IOPS=7301, BW=28.5MiB/s (29.9MB/s)(85.2MiB/2987msec) 00:20:01.179 slat (usec): min=6, max=16899, avg=12.92, stdev=218.65 00:20:01.179 clat (usec): min=48, max=21054, 
avg=121.68, stdev=156.52 00:20:01.179 lat (usec): min=58, max=21063, avg=134.60, stdev=268.80 00:20:01.179 clat percentiles (usec): 00:20:01.179 | 1.00th=[ 63], 5.00th=[ 76], 10.00th=[ 79], 20.00th=[ 84], 00:20:01.179 | 30.00th=[ 91], 40.00th=[ 113], 50.00th=[ 126], 60.00th=[ 133], 00:20:01.179 | 70.00th=[ 141], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 169], 00:20:01.179 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 221], 99.95th=[ 225], 00:20:01.179 | 99.99th=[ 578] 00:20:01.179 bw ( KiB/s): min=23816, max=40776, per=23.95%, avg=29278.40, stdev=6771.63, samples=5 00:20:01.179 iops : min= 5954, max=10194, avg=7319.60, stdev=1692.91, samples=5 00:20:01.179 lat (usec) : 50=0.01%, 100=35.63%, 250=64.33%, 500=0.01%, 750=0.01% 00:20:01.179 lat (msec) : 10=0.01%, 50=0.01% 00:20:01.179 cpu : usr=3.55%, sys=10.88%, ctx=21817, majf=0, minf=1 00:20:01.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.179 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.179 issued rwts: total=21810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:01.179 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3558338: Sat Jul 13 21:06:51 2024 00:20:01.179 read: IOPS=10.4k, BW=40.8MiB/s (42.8MB/s)(130MiB/3184msec) 00:20:01.179 slat (usec): min=7, max=11746, avg=10.30, stdev=110.54 00:20:01.179 clat (usec): min=46, max=32224, avg=84.28, stdev=176.58 00:20:01.179 lat (usec): min=59, max=32233, avg=94.57, stdev=208.35 00:20:01.179 clat percentiles (usec): 00:20:01.179 | 1.00th=[ 58], 5.00th=[ 66], 10.00th=[ 74], 20.00th=[ 79], 00:20:01.179 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 00:20:01.179 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 93], 95.00th=[ 96], 00:20:01.179 | 99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 120], 99.95th=[ 157], 00:20:01.179 | 99.99th=[ 334] 00:20:01.179 bw ( KiB/s): min=39489, max=42816, per=34.04%, avg=41613.50, stdev=1179.97, samples=6 00:20:01.179 iops : min= 9872, max=10704, avg=10403.33, stdev=295.08, samples=6 00:20:01.179 lat (usec) : 50=0.01%, 100=97.74%, 250=2.22%, 500=0.02% 00:20:01.179 lat (msec) : 50=0.01% 00:20:01.179 cpu : usr=4.27%, sys=14.70%, ctx=33243, majf=0, minf=1 00:20:01.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.179 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.179 issued rwts: total=33236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:01.179 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3558355: Sat Jul 13 21:06:51 2024 00:20:01.179 read: IOPS=7330, BW=28.6MiB/s (30.0MB/s)(80.2MiB/2802msec) 00:20:01.179 slat (usec): min=8, max=11817, avg=10.10, stdev=98.98 00:20:01.179 clat (usec): min=64, max=741, avg=124.12, stdev=29.93 00:20:01.179 lat (usec): min=72, max=11944, avg=134.22, stdev=103.44 00:20:01.179 clat percentiles (usec): 00:20:01.179 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 92], 00:20:01.179 | 30.00th=[ 98], 40.00th=[ 113], 50.00th=[ 129], 60.00th=[ 135], 00:20:01.179 | 70.00th=[ 143], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 169], 00:20:01.179 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 217], 
99.95th=[ 221], 00:20:01.179 | 99.99th=[ 375] 00:20:01.179 bw ( KiB/s): min=23576, max=38448, per=24.07%, avg=29425.60, stdev=5463.83, samples=5 00:20:01.179 iops : min= 5894, max= 9612, avg=7356.40, stdev=1365.96, samples=5 00:20:01.179 lat (usec) : 100=32.35%, 250=67.63%, 500=0.02%, 750=0.01% 00:20:01.179 cpu : usr=2.96%, sys=10.53%, ctx=20543, majf=0, minf=1 00:20:01.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.179 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.179 issued rwts: total=20541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:01.179 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3558362: Sat Jul 13 21:06:51 2024 00:20:01.179 read: IOPS=8316, BW=32.5MiB/s (34.1MB/s)(84.9MiB/2613msec) 00:20:01.179 slat (nsec): min=8067, max=42159, avg=8829.76, stdev=860.45 00:20:01.179 clat (usec): min=69, max=372, avg=109.93, stdev=30.68 00:20:01.179 lat (usec): min=81, max=381, avg=118.76, stdev=30.73 00:20:01.179 clat percentiles (usec): 00:20:01.179 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:20:01.179 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 99], 00:20:01.179 | 70.00th=[ 109], 80.00th=[ 151], 90.00th=[ 163], 95.00th=[ 167], 00:20:01.179 | 99.00th=[ 184], 99.50th=[ 200], 99.90th=[ 221], 99.95th=[ 225], 00:20:01.179 | 99.99th=[ 293] 00:20:01.179 bw ( KiB/s): min=23616, max=39832, per=27.13%, avg=33171.20, stdev=8234.84, samples=5 00:20:01.179 iops : min= 5904, max= 9958, avg=8292.80, stdev=2058.71, samples=5 00:20:01.179 lat (usec) : 100=62.11%, 250=37.87%, 500=0.02% 00:20:01.179 cpu : usr=3.25%, sys=12.14%, ctx=21733, majf=0, minf=2 00:20:01.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.180 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.180 issued rwts: total=21732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:01.180 00:20:01.180 Run status group 0 (all jobs): 00:20:01.180 READ: bw=119MiB/s (125MB/s), 28.5MiB/s-40.8MiB/s (29.9MB/s-42.8MB/s), io=380MiB (399MB), run=2613-3184msec 00:20:01.180 00:20:01.180 Disk stats (read/write): 00:20:01.180 nvme0n1: ios=20299/0, merge=0/0, ticks=2355/0, in_queue=2355, util=92.59% 00:20:01.180 nvme0n2: ios=31999/0, merge=0/0, ticks=2465/0, in_queue=2465, util=94.27% 00:20:01.180 nvme0n3: ios=18949/0, merge=0/0, ticks=2239/0, in_queue=2239, util=96.03% 00:20:01.180 nvme0n4: ios=21533/0, merge=0/0, ticks=2212/0, in_queue=2212, util=96.46% 00:20:01.438 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:01.438 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:01.438 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:01.438 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:01.696 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:01.696 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:01.955 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:01.955 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:02.214 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:20:02.214 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 3558046 00:20:02.214 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:20:02.214 21:06:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:03.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:03.157 nvmf hotplug test: fio failed as expected 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:03.157 21:06:53 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:03.157 rmmod nvme_rdma 00:20:03.157 rmmod nvme_fabrics 00:20:03.157 21:06:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:03.157 21:06:54 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:20:03.157 21:06:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:20:03.157 21:06:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3555207 ']' 00:20:03.157 21:06:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3555207 00:20:03.157 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3555207 ']' 00:20:03.157 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3555207 00:20:03.157 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:20:03.157 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:03.157 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3555207 00:20:03.416 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:03.416 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:03.416 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3555207' 00:20:03.416 killing process with pid 3555207 00:20:03.416 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3555207 00:20:03.416 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3555207 00:20:03.675 21:06:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:03.675 21:06:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:03.675 00:20:03.675 real 0m26.417s 00:20:03.675 user 2m6.666s 00:20:03.675 sys 0m10.293s 00:20:03.675 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:03.675 21:06:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.675 ************************************ 00:20:03.675 END TEST nvmf_fio_target 00:20:03.675 ************************************ 00:20:03.675 21:06:54 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:03.675 21:06:54 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:03.675 21:06:54 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:03.675 21:06:54 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:03.675 ************************************ 00:20:03.675 START TEST nvmf_bdevio 00:20:03.675 ************************************ 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:03.675 * Looking for test storage... 
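# Annotation (not part of the captured log): the nvmf_fio_target stage above has just been torn down and run_test is launching the bdevio stage. A hedged sketch of reproducing that stage by hand from an SPDK checkout — the /var/jenkins workspace path is specific to this CI node and would differ locally:
#   sudo test/nvmf/target/bdevio.sh --transport=rdma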
00:20:03.675 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:03.675 21:06:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
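# Annotation (not part of the captured log): the nvmf/common.sh trace above is building PCI vendor:device ID tables (Intel E810/X722, Mellanox mlx5) to locate RDMA-capable NICs for the test. A minimal sketch of the same lookup done directly with lspci, assuming a Mellanox part (vendor ID 0x15b3, as matched in the trace below):
#   lspci -nn -d 15b3: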
00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:10.252 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:10.252 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:10.252 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:10.252 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:10.252 
21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:10.252 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:10.253 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:10.253 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:10.253 altname enp217s0f0np0 00:20:10.253 altname ens818f0np0 00:20:10.253 inet 192.168.100.8/24 scope global mlx_0_0 00:20:10.253 valid_lft forever preferred_lft forever 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:10.253 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:10.253 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:10.253 altname enp217s0f1np1 00:20:10.253 altname ens818f1np1 00:20:10.253 inet 192.168.100.9/24 scope global mlx_0_1 00:20:10.253 valid_lft forever preferred_lft forever 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 
-- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:10.253 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:10.254 192.168.100.9' 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:10.254 192.168.100.9' 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:10.254 192.168.100.9' 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.254 21:07:00 
nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:10.254 21:07:00 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:10.254 21:07:01 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3562457 00:20:10.254 21:07:01 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:10.254 21:07:01 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3562457 00:20:10.254 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3562457 ']' 00:20:10.254 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.254 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:10.254 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.254 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:10.254 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:10.254 [2024-07-13 21:07:01.052168] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:10.254 [2024-07-13 21:07:01.052226] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.254 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.254 [2024-07-13 21:07:01.126987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.517 [2024-07-13 21:07:01.167188] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.517 [2024-07-13 21:07:01.167230] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.517 [2024-07-13 21:07:01.167239] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.517 [2024-07-13 21:07:01.167248] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.517 [2024-07-13 21:07:01.167255] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
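The target above was launched with core mask 0x78, and the reactor notices that follow confirm the decode: bits 3 through 6 are set, so reactors come up on cores 3, 4, 5, and 6. A minimal sketch of reading such a mask (mask value taken from this run):

    # Decode an SPDK/DPDK hex core mask into a list of CPU cores
    mask=0x78
    for core in $(seq 0 63); do
        (( (mask >> core) & 1 )) && echo "core $core"
    done
    # prints: core 3, core 4, core 5, core 6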
00:20:10.517 [2024-07-13 21:07:01.167375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:10.517 [2024-07-13 21:07:01.167497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:10.517 [2024-07-13 21:07:01.167605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.517 [2024-07-13 21:07:01.167606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:11.084 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:11.084 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:20:11.084 21:07:01 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.084 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.084 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:11.084 21:07:01 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.084 21:07:01 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:11.084 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.084 21:07:01 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:11.084 [2024-07-13 21:07:01.936871] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6b8560/0x6bca50) succeed. 00:20:11.084 [2024-07-13 21:07:01.947129] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6b9ba0/0x6fe0e0) succeed. 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:11.344 Malloc0 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:11.344 [2024-07-13 21:07:02.113147] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.344 { 00:20:11.344 "params": { 00:20:11.344 "name": "Nvme$subsystem", 00:20:11.344 "trtype": "$TEST_TRANSPORT", 00:20:11.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.344 "adrfam": "ipv4", 00:20:11.344 "trsvcid": "$NVMF_PORT", 00:20:11.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.344 "hdgst": ${hdgst:-false}, 00:20:11.344 "ddgst": ${ddgst:-false} 00:20:11.344 }, 00:20:11.344 "method": "bdev_nvme_attach_controller" 00:20:11.344 } 00:20:11.344 EOF 00:20:11.344 )") 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:11.344 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:11.344 "params": { 00:20:11.344 "name": "Nvme1", 00:20:11.344 "trtype": "rdma", 00:20:11.344 "traddr": "192.168.100.8", 00:20:11.344 "adrfam": "ipv4", 00:20:11.344 "trsvcid": "4420", 00:20:11.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.344 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.344 "hdgst": false, 00:20:11.344 "ddgst": false 00:20:11.344 }, 00:20:11.344 "method": "bdev_nvme_attach_controller" 00:20:11.344 }' 00:20:11.344 [2024-07-13 21:07:02.164958] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
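gen_nvmf_target_json above builds the bdev_nvme_attach_controller object that bdevio consumes over /dev/fd/62. Written to an ordinary file instead, the invocation looks roughly like this; the params mirror this run, while the outer "subsystems"/"config" wrapper is an assumption about what the helper emits around them:

    # Feed bdevio the traced controller definition from a file (sketch)
    cat > /tmp/bdevio.json <<'EOF'
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        } ]
      } ]
    }
    EOF
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio.json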
00:20:11.344 [2024-07-13 21:07:02.165019] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3562734 ] 00:20:11.344 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.603 [2024-07-13 21:07:02.236490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:11.603 [2024-07-13 21:07:02.277318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.603 [2024-07-13 21:07:02.277414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.603 [2024-07-13 21:07:02.277416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.603 I/O targets: 00:20:11.603 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:11.603 00:20:11.603 00:20:11.603 CUnit - A unit testing framework for C - Version 2.1-3 00:20:11.603 http://cunit.sourceforge.net/ 00:20:11.603 00:20:11.603 00:20:11.603 Suite: bdevio tests on: Nvme1n1 00:20:11.603 Test: blockdev write read block ...passed 00:20:11.603 Test: blockdev write zeroes read block ...passed 00:20:11.603 Test: blockdev write zeroes read no split ...passed 00:20:11.603 Test: blockdev write zeroes read split ...passed 00:20:11.603 Test: blockdev write zeroes read split partial ...passed 00:20:11.603 Test: blockdev reset ...[2024-07-13 21:07:02.478451] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:11.954 [2024-07-13 21:07:02.501302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:11.954 [2024-07-13 21:07:02.527886] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
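The "EAL: No free 2048 kB hugepages reported on node 1" notice above is often benign when the hugepages were reserved on another NUMA node, but the per-node pools are easy to inspect when it is not:

    # Check 2 MiB hugepage pools, overall and per NUMA node
    grep Huge /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages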
00:20:11.954 passed 00:20:11.954 Test: blockdev write read 8 blocks ...passed 00:20:11.954 Test: blockdev write read size > 128k ...passed 00:20:11.954 Test: blockdev write read invalid size ...passed 00:20:11.954 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:11.954 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:11.954 Test: blockdev write read max offset ...passed 00:20:11.954 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:11.954 Test: blockdev writev readv 8 blocks ...passed 00:20:11.954 Test: blockdev writev readv 30 x 1block ...passed 00:20:11.954 Test: blockdev writev readv block ...passed 00:20:11.954 Test: blockdev writev readv size > 128k ...passed 00:20:11.954 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:11.954 Test: blockdev comparev and writev ...[2024-07-13 21:07:02.530875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.954 [2024-07-13 21:07:02.530903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.954 [2024-07-13 21:07:02.530916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.954 [2024-07-13 21:07:02.530925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.954 [2024-07-13 21:07:02.531079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.954 [2024-07-13 21:07:02.531090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:11.954 [2024-07-13 21:07:02.531100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.954 [2024-07-13 21:07:02.531109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:11.954 [2024-07-13 21:07:02.531261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.954 [2024-07-13 21:07:02.531272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:11.955 [2024-07-13 21:07:02.531282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.955 [2024-07-13 21:07:02.531291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:11.955 [2024-07-13 21:07:02.531457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.955 [2024-07-13 21:07:02.531467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:11.955 [2024-07-13 21:07:02.531478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.955 [2024-07-13 21:07:02.531487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:11.955 passed 00:20:11.955 Test: blockdev nvme passthru rw ...passed 00:20:11.955 Test: blockdev nvme passthru vendor specific ...[2024-07-13 21:07:02.531757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:11.955 [2024-07-13 21:07:02.531769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:11.955 [2024-07-13 21:07:02.531809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:11.955 [2024-07-13 21:07:02.531819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:11.955 [2024-07-13 21:07:02.531861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:11.955 [2024-07-13 21:07:02.531870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:11.955 [2024-07-13 21:07:02.531920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:11.955 [2024-07-13 21:07:02.531930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:11.955 passed 00:20:11.955 Test: blockdev nvme admin passthru ...passed 00:20:11.955 Test: blockdev copy ...passed 00:20:11.955 00:20:11.955 Run Summary: Type Total Ran Passed Failed Inactive 00:20:11.955 suites 1 1 n/a 0 0 00:20:11.955 tests 23 23 23 0 0 00:20:11.955 asserts 152 152 152 0 n/a 00:20:11.955 00:20:11.955 Elapsed time = 0.173 seconds 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:11.955 rmmod nvme_rdma 00:20:11.955 rmmod nvme_fabrics 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3562457 ']' 00:20:11.955 21:07:02 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3562457 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 3562457 ']' 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3562457 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:11.955 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3562457 00:20:12.234 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:20:12.234 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:20:12.234 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3562457' 00:20:12.234 killing process with pid 3562457 00:20:12.234 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3562457 00:20:12.234 21:07:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3562457 00:20:12.234 21:07:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.234 21:07:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:12.234 00:20:12.234 real 0m8.686s 00:20:12.234 user 0m10.466s 00:20:12.234 sys 0m5.549s 00:20:12.234 21:07:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:12.234 21:07:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:12.234 ************************************ 00:20:12.234 END TEST nvmf_bdevio 00:20:12.234 ************************************ 00:20:12.494 21:07:03 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:20:12.494 21:07:03 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:12.494 21:07:03 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:12.494 21:07:03 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:12.494 ************************************ 00:20:12.494 START TEST nvmf_auth_target 00:20:12.494 ************************************ 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:20:12.494 * Looking for test storage... 
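The killprocess sequence above guards with kill -0, reads the process name via ps --no-headers -o comm= so it never signals sudo by mistake, then kills and reaps. A condensed sketch of that pattern (not the exact autotest_common.sh implementation):

    # killprocess pattern: liveness check, name guard, kill, reap
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap; ignore exit status
    }
    killprocess 3562457                               # pid from this run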
00:20:12.494 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.494 21:07:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:19.065 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:19.065 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:19.065 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:19.065 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:19.065 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:19.066 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:19.066 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:19.066 altname enp217s0f0np0 00:20:19.066 altname ens818f0np0 00:20:19.066 inet 192.168.100.8/24 scope global mlx_0_0 00:20:19.066 valid_lft forever preferred_lft forever 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:19.066 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:19.066 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:19.066 altname enp217s0f1np1 00:20:19.066 altname ens818f1np1 00:20:19.066 inet 192.168.100.9/24 scope global mlx_0_1 00:20:19.066 valid_lft forever preferred_lft forever 
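The lookup traced above trims ip's one-line output down to a bare IPv4 address. Standalone, with the interface name from this run:

    # get_ip_address pipeline exactly as traced
    interface=mlx_0_0
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    # -> 192.168.100.8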
00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:19.066 
21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:19.066 192.168.100.9' 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:19.066 192.168.100.9' 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:19.066 192.168.100.9' 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:19.066 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3566165 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3566165 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3566165 ']' 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
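RDMA_IP_LIST holds one address per line, and the head/tail pair above peels off the first and second entries. The same selection in isolation:

    # Split the discovered RDMA IPs as the trace does
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9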
00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:19.067 21:07:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3566184 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3cc85491f44083386ad53fa13772b09b625fe6c601e9f8df 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.U9B 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3cc85491f44083386ad53fa13772b09b625fe6c601e9f8df 0 00:20:19.326 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3cc85491f44083386ad53fa13772b09b625fe6c601e9f8df 0 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3cc85491f44083386ad53fa13772b09b625fe6c601e9f8df 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.U9B 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.U9B 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.U9B 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1f111d6efdef4d184cfe6d8492352ffb848aa50f17499b7d1d480db27d46c216 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Krv 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1f111d6efdef4d184cfe6d8492352ffb848aa50f17499b7d1d480db27d46c216 3 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1f111d6efdef4d184cfe6d8492352ffb848aa50f17499b7d1d480db27d46c216 3 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1f111d6efdef4d184cfe6d8492352ffb848aa50f17499b7d1d480db27d46c216 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Krv 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Krv 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Krv 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6ab45db3cb929bea2c20dce2dd84f174 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ehf 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6ab45db3cb929bea2c20dce2dd84f174 1 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6ab45db3cb929bea2c20dce2dd84f174 1 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6ab45db3cb929bea2c20dce2dd84f174 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ehf 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ehf 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.ehf 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3f12049ea02e10ff57c8eacd43c69f92336b58d7dbd73ed3 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.M7R 00:20:19.586 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3f12049ea02e10ff57c8eacd43c69f92336b58d7dbd73ed3 2 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3f12049ea02e10ff57c8eacd43c69f92336b58d7dbd73ed3 2 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3f12049ea02e10ff57c8eacd43c69f92336b58d7dbd73ed3 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.M7R 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.M7R 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.M7R 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=ba8bf6797db219aa12330879485e12d14b30e9f2497ec4a2 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CcR 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ba8bf6797db219aa12330879485e12d14b30e9f2497ec4a2 2 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ba8bf6797db219aa12330879485e12d14b30e9f2497ec4a2 2 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ba8bf6797db219aa12330879485e12d14b30e9f2497ec4a2 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:19.587 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CcR 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CcR 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.CcR 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ae0d865053ca38357521c3f184d591e0 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yGT 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ae0d865053ca38357521c3f184d591e0 1 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ae0d865053ca38357521c3f184d591e0 1 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ae0d865053ca38357521c3f184d591e0 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yGT 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yGT 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.yGT 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7f8609455765cc8e88c9c01b0360c7bf8391d856fb9e53c1dc87e910c4d760aa 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Phh 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7f8609455765cc8e88c9c01b0360c7bf8391d856fb9e53c1dc87e910c4d760aa 3 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7f8609455765cc8e88c9c01b0360c7bf8391d856fb9e53c1dc87e910c4d760aa 3 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7f8609455765cc8e88c9c01b0360c7bf8391d856fb9e53c1dc87e910c4d760aa 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Phh 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Phh 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Phh 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3566165 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3566165 ']' 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
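# NOTE (annotation, not part of the trace): every keys[] and ckeys[] file registered
# below is produced by the same gen_dhchap_key/format_dhchap_key recipe traced above.
# A minimal sketch of one call, gen_dhchap_key sha384 48, reconstructed from this log;
# the CRC32 tail and trailing colon are inferred from decoding the DHHC-1 secrets that
# appear later in this run, and digest codes are null=0, sha256=1, sha384=2, sha512=3:
#
#   key=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 random bytes -> 48 hex chars
#   file=$(mktemp -t spdk.key-sha384.XXX)
#   python3 - <<EOF > "$file"
#   import base64, zlib
#   key = b"$key"                                  # the hex string itself is the secret
#   crc = zlib.crc32(key).to_bytes(4, "little")    # DHHC-1 appends a CRC32 of the secret
#   print("DHHC-1:02:" + base64.b64encode(key + crc).decode() + ":", end="")
#   EOF
#   chmod 0600 "$file"                             # key files must not be world-readable
#
# The resulting path is echoed back and stored in keys[N] (or ckeys[N] for the
# controller-side secret of the bidirectional exchange).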
00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:19.847 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.106 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:20.106 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:20.106 21:07:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3566184 /var/tmp/host.sock 00:20:20.106 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3566184 ']' 00:20:20.106 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:20:20.106 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:20.106 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:20.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:20.106 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:20.106 21:07:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.U9B 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.U9B 00:20:20.366 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.U9B 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Krv ]] 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Krv 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Krv 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Krv 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ehf 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ehf 00:20:20.626 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ehf 00:20:20.886 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.M7R ]] 00:20:20.886 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M7R 00:20:20.886 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.886 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.886 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.886 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M7R 00:20:20.886 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.M7R 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.CcR 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.CcR 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.CcR 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.yGT ]] 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yGT 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yGT 00:20:21.146 21:07:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yGT 00:20:21.405 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:21.405 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Phh 00:20:21.405 21:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.405 21:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.405 21:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.405 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Phh 00:20:21.405 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Phh 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.665 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.924 00:20:21.924 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.924 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.924 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.184 { 00:20:22.184 "cntlid": 1, 00:20:22.184 "qid": 0, 00:20:22.184 "state": "enabled", 00:20:22.184 "listen_address": { 00:20:22.184 "trtype": "RDMA", 00:20:22.184 "adrfam": "IPv4", 00:20:22.184 "traddr": "192.168.100.8", 00:20:22.184 "trsvcid": "4420" 00:20:22.184 }, 00:20:22.184 "peer_address": { 00:20:22.184 "trtype": "RDMA", 00:20:22.184 "adrfam": "IPv4", 00:20:22.184 "traddr": "192.168.100.8", 00:20:22.184 "trsvcid": "51782" 00:20:22.184 }, 00:20:22.184 "auth": { 00:20:22.184 "state": "completed", 00:20:22.184 "digest": "sha256", 00:20:22.184 "dhgroup": "null" 00:20:22.184 } 00:20:22.184 } 00:20:22.184 ]' 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:22.184 21:07:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.184 21:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.184 21:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.184 21:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.453 21:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:20:23.019 21:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.277 21:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:23.278 21:07:13 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.278 21:07:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.278 21:07:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.278 21:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.278 21:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:23.278 21:07:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.278 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.536 00:20:23.536 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.536 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.536 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.795 21:07:14 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.795 { 00:20:23.795 "cntlid": 3, 00:20:23.795 "qid": 0, 00:20:23.795 "state": "enabled", 00:20:23.795 "listen_address": { 00:20:23.795 "trtype": "RDMA", 00:20:23.795 "adrfam": "IPv4", 00:20:23.795 "traddr": "192.168.100.8", 00:20:23.795 "trsvcid": "4420" 00:20:23.795 }, 00:20:23.795 "peer_address": { 00:20:23.795 "trtype": "RDMA", 00:20:23.795 "adrfam": "IPv4", 00:20:23.795 "traddr": "192.168.100.8", 00:20:23.795 "trsvcid": "41405" 00:20:23.795 }, 00:20:23.795 "auth": { 00:20:23.795 "state": "completed", 00:20:23.795 "digest": "sha256", 00:20:23.795 "dhgroup": "null" 00:20:23.795 } 00:20:23.795 } 00:20:23.795 ]' 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.795 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.053 21:07:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:20:24.621 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
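# NOTE (annotation, not part of the trace): each connect_authenticate digest/dhgroup/keyid
# iteration in this log (two complete so far; the key2 pass is just starting) follows the
# same five-step shape. Condensed into one sketch, where $hostnqn abbreviates the
# nqn.2014-08.org.nvmexpress:uuid:8013ee90-... host NQN used throughout this run:
#
#   rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
#   # 1. restrict the host-side bdev layer to one digest/dhgroup combination
#   $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
#   # 2. allow the host on the target subsystem with the keys under test
#   $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
#   # 3. attach from the host side; this is what drives the DH-HMAC-CHAP exchange over RDMA
#   $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
#       -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
#       --dhchap-key key1 --dhchap-ctrlr-key ckey1
#   # 4. verify on the target that the qpair negotiated what was requested
#   $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
#   # 5. detach, re-check with the kernel initiator (nvme connect ... --dhchap-secret),
#   #    then nvmf_subsystem_remove_host before the next iteration
#   $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0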
00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.881 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.140 00:20:25.140 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.140 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.140 21:07:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.399 { 00:20:25.399 "cntlid": 5, 00:20:25.399 "qid": 0, 00:20:25.399 "state": "enabled", 00:20:25.399 "listen_address": { 00:20:25.399 "trtype": "RDMA", 00:20:25.399 "adrfam": "IPv4", 00:20:25.399 "traddr": "192.168.100.8", 00:20:25.399 "trsvcid": "4420" 00:20:25.399 }, 00:20:25.399 "peer_address": { 00:20:25.399 "trtype": "RDMA", 00:20:25.399 "adrfam": "IPv4", 00:20:25.399 "traddr": "192.168.100.8", 00:20:25.399 "trsvcid": "41745" 00:20:25.399 }, 00:20:25.399 "auth": { 00:20:25.399 "state": "completed", 00:20:25.399 "digest": "sha256", 00:20:25.399 "dhgroup": "null" 00:20:25.399 } 00:20:25.399 } 00:20:25.399 ]' 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null 
== \n\u\l\l ]] 00:20:25.399 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.657 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.657 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.657 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.657 21:07:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:20:26.225 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.484 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:26.484 21:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.484 21:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.484 21:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.484 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.484 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:26.484 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:20:26.743 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.743 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.002 { 00:20:27.002 "cntlid": 7, 00:20:27.002 "qid": 0, 00:20:27.002 "state": "enabled", 00:20:27.002 "listen_address": { 00:20:27.002 "trtype": "RDMA", 00:20:27.002 "adrfam": "IPv4", 00:20:27.002 "traddr": "192.168.100.8", 00:20:27.002 "trsvcid": "4420" 00:20:27.002 }, 00:20:27.002 "peer_address": { 00:20:27.002 "trtype": "RDMA", 00:20:27.002 "adrfam": "IPv4", 00:20:27.002 "traddr": "192.168.100.8", 00:20:27.002 "trsvcid": "43160" 00:20:27.002 }, 00:20:27.002 "auth": { 00:20:27.002 "state": "completed", 00:20:27.002 "digest": "sha256", 00:20:27.002 "dhgroup": "null" 00:20:27.002 } 00:20:27.002 } 00:20:27.002 ]' 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.002 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.262 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:27.262 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.262 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.262 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.262 21:07:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.262 21:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:20:28.198 21:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.198 21:07:18 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:28.198 21:07:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.198 21:07:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.198 21:07:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.198 21:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.198 21:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.198 21:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:28.198 21:07:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.198 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.456 00:20:28.456 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.456 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.456 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.713 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.713 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:28.713 21:07:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.713 21:07:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.713 21:07:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.713 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.713 { 00:20:28.713 "cntlid": 9, 00:20:28.713 "qid": 0, 00:20:28.713 "state": "enabled", 00:20:28.713 "listen_address": { 00:20:28.713 "trtype": "RDMA", 00:20:28.713 "adrfam": "IPv4", 00:20:28.713 "traddr": "192.168.100.8", 00:20:28.713 "trsvcid": "4420" 00:20:28.714 }, 00:20:28.714 "peer_address": { 00:20:28.714 "trtype": "RDMA", 00:20:28.714 "adrfam": "IPv4", 00:20:28.714 "traddr": "192.168.100.8", 00:20:28.714 "trsvcid": "57320" 00:20:28.714 }, 00:20:28.714 "auth": { 00:20:28.714 "state": "completed", 00:20:28.714 "digest": "sha256", 00:20:28.714 "dhgroup": "ffdhe2048" 00:20:28.714 } 00:20:28.714 } 00:20:28.714 ]' 00:20:28.714 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.714 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.714 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.714 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.714 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.714 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.714 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.714 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.981 21:07:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:20:29.547 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.805 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.064 00:20:30.064 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.064 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.064 21:07:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.322 { 00:20:30.322 "cntlid": 11, 00:20:30.322 "qid": 0, 00:20:30.322 "state": "enabled", 00:20:30.322 "listen_address": { 00:20:30.322 "trtype": "RDMA", 00:20:30.322 "adrfam": "IPv4", 00:20:30.322 "traddr": "192.168.100.8", 00:20:30.322 "trsvcid": "4420" 00:20:30.322 }, 00:20:30.322 "peer_address": { 00:20:30.322 "trtype": "RDMA", 00:20:30.322 "adrfam": "IPv4", 00:20:30.322 "traddr": "192.168.100.8", 00:20:30.322 "trsvcid": "42137" 00:20:30.322 }, 00:20:30.322 "auth": { 00:20:30.322 "state": "completed", 00:20:30.322 
"digest": "sha256", 00:20:30.322 "dhgroup": "ffdhe2048" 00:20:30.322 } 00:20:30.322 } 00:20:30.322 ]' 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.322 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.588 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:20:31.160 21:07:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.419 
21:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.419 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.679 00:20:31.679 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.679 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.679 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.938 { 00:20:31.938 "cntlid": 13, 00:20:31.938 "qid": 0, 00:20:31.938 "state": "enabled", 00:20:31.938 "listen_address": { 00:20:31.938 "trtype": "RDMA", 00:20:31.938 "adrfam": "IPv4", 00:20:31.938 "traddr": "192.168.100.8", 00:20:31.938 "trsvcid": "4420" 00:20:31.938 }, 00:20:31.938 "peer_address": { 00:20:31.938 "trtype": "RDMA", 00:20:31.938 "adrfam": "IPv4", 00:20:31.938 "traddr": "192.168.100.8", 00:20:31.938 "trsvcid": "33439" 00:20:31.938 }, 00:20:31.938 "auth": { 00:20:31.938 "state": "completed", 00:20:31.938 "digest": "sha256", 00:20:31.938 "dhgroup": "ffdhe2048" 00:20:31.938 } 00:20:31.938 } 00:20:31.938 ]' 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.938 21:07:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller 
nvme0 00:20:32.197 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:20:32.764 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.022 21:07:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.280 00:20:33.280 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.280 21:07:24 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.280 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.539 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.539 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.539 21:07:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.539 21:07:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.539 21:07:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.539 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.539 { 00:20:33.539 "cntlid": 15, 00:20:33.539 "qid": 0, 00:20:33.539 "state": "enabled", 00:20:33.539 "listen_address": { 00:20:33.539 "trtype": "RDMA", 00:20:33.539 "adrfam": "IPv4", 00:20:33.539 "traddr": "192.168.100.8", 00:20:33.539 "trsvcid": "4420" 00:20:33.540 }, 00:20:33.540 "peer_address": { 00:20:33.540 "trtype": "RDMA", 00:20:33.540 "adrfam": "IPv4", 00:20:33.540 "traddr": "192.168.100.8", 00:20:33.540 "trsvcid": "51641" 00:20:33.540 }, 00:20:33.540 "auth": { 00:20:33.540 "state": "completed", 00:20:33.540 "digest": "sha256", 00:20:33.540 "dhgroup": "ffdhe2048" 00:20:33.540 } 00:20:33.540 } 00:20:33.540 ]' 00:20:33.540 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.540 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.540 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.540 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.540 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.799 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.799 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.799 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.799 21:07:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:20:34.367 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.626 21:07:25 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.626 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:34.886 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.886 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.886 21:07:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.886 21:07:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.886 21:07:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.886 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.886 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.886 00:20:34.886 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.886 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.886 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.144 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.144 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.144 21:07:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.144 21:07:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.144 21:07:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.144 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.144 { 00:20:35.144 "cntlid": 17, 00:20:35.144 "qid": 0, 00:20:35.144 "state": "enabled", 00:20:35.144 
"listen_address": { 00:20:35.144 "trtype": "RDMA", 00:20:35.144 "adrfam": "IPv4", 00:20:35.144 "traddr": "192.168.100.8", 00:20:35.144 "trsvcid": "4420" 00:20:35.144 }, 00:20:35.144 "peer_address": { 00:20:35.144 "trtype": "RDMA", 00:20:35.144 "adrfam": "IPv4", 00:20:35.144 "traddr": "192.168.100.8", 00:20:35.144 "trsvcid": "48816" 00:20:35.144 }, 00:20:35.144 "auth": { 00:20:35.144 "state": "completed", 00:20:35.144 "digest": "sha256", 00:20:35.144 "dhgroup": "ffdhe3072" 00:20:35.144 } 00:20:35.144 } 00:20:35.144 ]' 00:20:35.144 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.144 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.144 21:07:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.144 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.144 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.403 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.403 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.403 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.403 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:20:35.970 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.229 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:36.229 21:07:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.229 21:07:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.229 21:07:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.229 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.229 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:36.229 21:07:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.488 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.488 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.747 { 00:20:36.747 "cntlid": 19, 00:20:36.747 "qid": 0, 00:20:36.747 "state": "enabled", 00:20:36.747 "listen_address": { 00:20:36.747 "trtype": "RDMA", 00:20:36.747 "adrfam": "IPv4", 00:20:36.747 "traddr": "192.168.100.8", 00:20:36.747 "trsvcid": "4420" 00:20:36.747 }, 00:20:36.747 "peer_address": { 00:20:36.747 "trtype": "RDMA", 00:20:36.747 "adrfam": "IPv4", 00:20:36.747 "traddr": "192.168.100.8", 00:20:36.747 "trsvcid": "40472" 00:20:36.747 }, 00:20:36.747 "auth": { 00:20:36.747 "state": "completed", 00:20:36.747 "digest": "sha256", 00:20:36.747 "dhgroup": "ffdhe3072" 00:20:36.747 } 00:20:36.747 } 00:20:36.747 ]' 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.747 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.005 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.005 21:07:27 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.005 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.006 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.006 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.006 21:07:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:20:37.650 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:37.910 21:07:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.169 00:20:38.169 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.169 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.169 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.428 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.428 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.428 21:07:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.428 21:07:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.428 21:07:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.428 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.428 { 00:20:38.428 "cntlid": 21, 00:20:38.428 "qid": 0, 00:20:38.428 "state": "enabled", 00:20:38.429 "listen_address": { 00:20:38.429 "trtype": "RDMA", 00:20:38.429 "adrfam": "IPv4", 00:20:38.429 "traddr": "192.168.100.8", 00:20:38.429 "trsvcid": "4420" 00:20:38.429 }, 00:20:38.429 "peer_address": { 00:20:38.429 "trtype": "RDMA", 00:20:38.429 "adrfam": "IPv4", 00:20:38.429 "traddr": "192.168.100.8", 00:20:38.429 "trsvcid": "39205" 00:20:38.429 }, 00:20:38.429 "auth": { 00:20:38.429 "state": "completed", 00:20:38.429 "digest": "sha256", 00:20:38.429 "dhgroup": "ffdhe3072" 00:20:38.429 } 00:20:38.429 } 00:20:38.429 ]' 00:20:38.429 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.429 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.429 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.688 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:38.688 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.688 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.688 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.688 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.688 21:07:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:20:39.257 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:39.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.516 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:39.516 21:07:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.516 21:07:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.516 21:07:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.516 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.516 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:39.516 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.776 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.036 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.036 { 00:20:40.036 "cntlid": 23, 00:20:40.036 "qid": 0, 00:20:40.036 "state": "enabled", 00:20:40.036 "listen_address": { 00:20:40.036 "trtype": "RDMA", 00:20:40.036 "adrfam": "IPv4", 00:20:40.036 "traddr": "192.168.100.8", 00:20:40.036 "trsvcid": "4420" 00:20:40.036 }, 00:20:40.036 "peer_address": { 00:20:40.036 "trtype": "RDMA", 00:20:40.036 "adrfam": "IPv4", 00:20:40.036 "traddr": "192.168.100.8", 00:20:40.036 "trsvcid": "38551" 00:20:40.036 }, 00:20:40.036 "auth": { 00:20:40.036 "state": "completed", 00:20:40.036 "digest": "sha256", 00:20:40.036 "dhgroup": "ffdhe3072" 00:20:40.036 } 00:20:40.036 } 00:20:40.036 ]' 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.036 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.296 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.296 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.296 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.296 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.296 21:07:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.296 21:07:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:20:41.235 21:07:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.235 21:07:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:41.235 21:07:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.235 21:07:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.235 21:07:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.235 21:07:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.235 21:07:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.235 21:07:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:41.235 21:07:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.236 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.495 00:20:41.495 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.495 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.495 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.754 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.754 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.754 21:07:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.754 21:07:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.754 21:07:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.754 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.754 { 00:20:41.754 "cntlid": 25, 00:20:41.754 "qid": 0, 00:20:41.754 "state": "enabled", 00:20:41.754 "listen_address": { 00:20:41.754 "trtype": "RDMA", 00:20:41.754 "adrfam": "IPv4", 00:20:41.754 "traddr": "192.168.100.8", 00:20:41.754 "trsvcid": "4420" 00:20:41.754 }, 00:20:41.754 "peer_address": { 00:20:41.754 "trtype": "RDMA", 00:20:41.754 "adrfam": "IPv4", 00:20:41.754 "traddr": "192.168.100.8", 00:20:41.754 "trsvcid": "53768" 00:20:41.754 }, 00:20:41.754 "auth": { 00:20:41.754 "state": "completed", 00:20:41.754 "digest": "sha256", 00:20:41.754 "dhgroup": "ffdhe4096" 00:20:41.754 } 00:20:41.754 } 00:20:41.754 ]' 00:20:41.754 21:07:32 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.754 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.754 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.754 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:41.754 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.014 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.014 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.014 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.014 21:07:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:20:42.581 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.839 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:42.839 21:07:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.839 21:07:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.839 21:07:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.839 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.839 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.839 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.098 21:07:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.357 00:20:43.357 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.357 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.357 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.357 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.357 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.357 21:07:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.357 21:07:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.617 21:07:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.617 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.617 { 00:20:43.617 "cntlid": 27, 00:20:43.617 "qid": 0, 00:20:43.617 "state": "enabled", 00:20:43.617 "listen_address": { 00:20:43.617 "trtype": "RDMA", 00:20:43.617 "adrfam": "IPv4", 00:20:43.617 "traddr": "192.168.100.8", 00:20:43.617 "trsvcid": "4420" 00:20:43.617 }, 00:20:43.617 "peer_address": { 00:20:43.617 "trtype": "RDMA", 00:20:43.617 "adrfam": "IPv4", 00:20:43.617 "traddr": "192.168.100.8", 00:20:43.617 "trsvcid": "41697" 00:20:43.617 }, 00:20:43.617 "auth": { 00:20:43.617 "state": "completed", 00:20:43.617 "digest": "sha256", 00:20:43.617 "dhgroup": "ffdhe4096" 00:20:43.617 } 00:20:43.617 } 00:20:43.617 ]' 00:20:43.617 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.617 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.617 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.617 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.617 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.617 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.617 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.617 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.875 21:07:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:20:44.442 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.442 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:44.442 21:07:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.442 21:07:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.442 21:07:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.442 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.442 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.442 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.702 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:44.702 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.702 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:44.702 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:44.702 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.703 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.703 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.703 21:07:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.703 21:07:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.703 21:07:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.703 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.703 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.962 00:20:44.962 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.962 21:07:35 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.962 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.221 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.221 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.221 21:07:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.221 21:07:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.222 21:07:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.222 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.222 { 00:20:45.222 "cntlid": 29, 00:20:45.222 "qid": 0, 00:20:45.222 "state": "enabled", 00:20:45.222 "listen_address": { 00:20:45.222 "trtype": "RDMA", 00:20:45.222 "adrfam": "IPv4", 00:20:45.222 "traddr": "192.168.100.8", 00:20:45.222 "trsvcid": "4420" 00:20:45.222 }, 00:20:45.222 "peer_address": { 00:20:45.222 "trtype": "RDMA", 00:20:45.222 "adrfam": "IPv4", 00:20:45.222 "traddr": "192.168.100.8", 00:20:45.222 "trsvcid": "52324" 00:20:45.222 }, 00:20:45.222 "auth": { 00:20:45.222 "state": "completed", 00:20:45.222 "digest": "sha256", 00:20:45.222 "dhgroup": "ffdhe4096" 00:20:45.222 } 00:20:45.222 } 00:20:45.222 ]' 00:20:45.222 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.222 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.222 21:07:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.222 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.222 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.222 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.222 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.222 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.481 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:20:46.049 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.309 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:46.309 21:07:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.309 21:07:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.309 21:07:36 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.309 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.309 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:46.309 21:07:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.309 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.567 00:20:46.567 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.567 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.567 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.826 { 00:20:46.826 "cntlid": 31, 00:20:46.826 "qid": 0, 00:20:46.826 "state": "enabled", 00:20:46.826 "listen_address": { 00:20:46.826 "trtype": "RDMA", 00:20:46.826 "adrfam": "IPv4", 00:20:46.826 "traddr": 
"192.168.100.8", 00:20:46.826 "trsvcid": "4420" 00:20:46.826 }, 00:20:46.826 "peer_address": { 00:20:46.826 "trtype": "RDMA", 00:20:46.826 "adrfam": "IPv4", 00:20:46.826 "traddr": "192.168.100.8", 00:20:46.826 "trsvcid": "45921" 00:20:46.826 }, 00:20:46.826 "auth": { 00:20:46.826 "state": "completed", 00:20:46.826 "digest": "sha256", 00:20:46.826 "dhgroup": "ffdhe4096" 00:20:46.826 } 00:20:46.826 } 00:20:46.826 ]' 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:46.826 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.085 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.085 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.085 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.085 21:07:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:20:47.653 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:47.911 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:48.170 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:48.170 21:07:38 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.170 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.170 21:07:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.170 21:07:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.170 21:07:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.170 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.170 21:07:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.429 00:20:48.429 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.429 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.429 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.429 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.429 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.429 21:07:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.429 21:07:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.689 21:07:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.689 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.689 { 00:20:48.689 "cntlid": 33, 00:20:48.689 "qid": 0, 00:20:48.689 "state": "enabled", 00:20:48.689 "listen_address": { 00:20:48.689 "trtype": "RDMA", 00:20:48.689 "adrfam": "IPv4", 00:20:48.689 "traddr": "192.168.100.8", 00:20:48.689 "trsvcid": "4420" 00:20:48.689 }, 00:20:48.689 "peer_address": { 00:20:48.689 "trtype": "RDMA", 00:20:48.689 "adrfam": "IPv4", 00:20:48.689 "traddr": "192.168.100.8", 00:20:48.689 "trsvcid": "35100" 00:20:48.689 }, 00:20:48.689 "auth": { 00:20:48.689 "state": "completed", 00:20:48.689 "digest": "sha256", 00:20:48.689 "dhgroup": "ffdhe6144" 00:20:48.689 } 00:20:48.689 } 00:20:48.689 ]' 00:20:48.689 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.689 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.689 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.689 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.689 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.689 21:07:39 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.689 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.689 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.948 21:07:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:20:49.515 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.515 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:49.515 21:07:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.515 21:07:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.515 21:07:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.515 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.515 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:49.515 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.774 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.774 21:07:40 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.032 00:20:50.033 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.033 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.033 21:07:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.292 { 00:20:50.292 "cntlid": 35, 00:20:50.292 "qid": 0, 00:20:50.292 "state": "enabled", 00:20:50.292 "listen_address": { 00:20:50.292 "trtype": "RDMA", 00:20:50.292 "adrfam": "IPv4", 00:20:50.292 "traddr": "192.168.100.8", 00:20:50.292 "trsvcid": "4420" 00:20:50.292 }, 00:20:50.292 "peer_address": { 00:20:50.292 "trtype": "RDMA", 00:20:50.292 "adrfam": "IPv4", 00:20:50.292 "traddr": "192.168.100.8", 00:20:50.292 "trsvcid": "53200" 00:20:50.292 }, 00:20:50.292 "auth": { 00:20:50.292 "state": "completed", 00:20:50.292 "digest": "sha256", 00:20:50.292 "dhgroup": "ffdhe6144" 00:20:50.292 } 00:20:50.292 } 00:20:50.292 ]' 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.292 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.550 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.551 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.551 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.551 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:20:51.117 21:07:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.376 
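Annotation: the repeating block above is one pass of the test's connect_authenticate helper (target/auth.sh@34-49 in this trace). Below is a minimal sketch of a single pass, reconstructed from the xtrace lines, not the verbatim script; connect_authenticate, hostrpc and rpc_cmd are names taken from the log, while subnqn/hostnqn are shorthands introduced here for the two NQNs shown throughout.

    # Sketch reconstructed from the xtrace; ckeys[] is populated elsewhere.
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # target/auth.sh@37: add --dhchap-ctrlr-key only when a controller key
        # exists for this key id, enabling bidirectional authentication.
        local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        # Register the host and its key(s) on the target side ...
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        # ... then attach from the host-side SPDK instance; the attach itself
        # only succeeds if DH-HMAC-CHAP authentication completes.
        hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
    }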
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.376 21:07:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.635 21:07:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.635 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.635 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.894 00:20:51.894 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.894 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.894 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.894 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.894 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.894 21:07:42 
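Annotation: two RPC channels run through this trace. rpc_cmd drives the SPDK target that exports nqn.2024-03.io.spdk:cnode0, while hostrpc is the same rpc.py aimed at a second SPDK instance playing the NVMe-oF host, as its expansion at target/auth.sh@31 shows on every call. In effect it is:

    # Inferred from the expansion logged at target/auth.sh@31.
    hostrpc() {
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }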
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.894 21:07:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.153 21:07:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.153 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.153 { 00:20:52.153 "cntlid": 37, 00:20:52.153 "qid": 0, 00:20:52.153 "state": "enabled", 00:20:52.153 "listen_address": { 00:20:52.153 "trtype": "RDMA", 00:20:52.153 "adrfam": "IPv4", 00:20:52.153 "traddr": "192.168.100.8", 00:20:52.153 "trsvcid": "4420" 00:20:52.153 }, 00:20:52.153 "peer_address": { 00:20:52.153 "trtype": "RDMA", 00:20:52.153 "adrfam": "IPv4", 00:20:52.153 "traddr": "192.168.100.8", 00:20:52.153 "trsvcid": "53860" 00:20:52.153 }, 00:20:52.153 "auth": { 00:20:52.153 "state": "completed", 00:20:52.153 "digest": "sha256", 00:20:52.153 "dhgroup": "ffdhe6144" 00:20:52.153 } 00:20:52.153 } 00:20:52.153 ]' 00:20:52.153 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.153 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.153 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.153 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.153 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.153 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.153 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.153 21:07:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.415 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:20:52.983 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.983 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:52.983 21:07:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.983 21:07:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.983 21:07:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.983 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.983 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.983 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.243 21:07:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.502 00:20:53.502 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.502 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.502 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.762 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.762 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.762 21:07:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.762 21:07:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.762 21:07:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.762 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.762 { 00:20:53.762 "cntlid": 39, 00:20:53.762 "qid": 0, 00:20:53.762 "state": "enabled", 00:20:53.762 "listen_address": { 00:20:53.762 "trtype": "RDMA", 00:20:53.762 "adrfam": "IPv4", 00:20:53.762 "traddr": "192.168.100.8", 00:20:53.762 "trsvcid": "4420" 00:20:53.762 }, 00:20:53.762 "peer_address": { 00:20:53.762 "trtype": "RDMA", 00:20:53.762 "adrfam": "IPv4", 00:20:53.762 "traddr": "192.168.100.8", 00:20:53.762 "trsvcid": "57792" 00:20:53.762 }, 00:20:53.762 "auth": { 00:20:53.762 "state": "completed", 00:20:53.762 "digest": "sha256", 00:20:53.762 "dhgroup": "ffdhe6144" 00:20:53.762 } 00:20:53.762 } 00:20:53.762 ]' 00:20:53.762 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.762 21:07:44 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.762 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.762 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:53.762 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.022 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.022 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.022 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.022 21:07:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:20:54.591 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.850 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:54.850 21:07:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.850 21:07:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.850 21:07:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.850 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.850 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.850 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.850 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
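Annotation: before every pass, bdev_nvme_set_options (target/auth.sh@94) pins the host to exactly one digest and one DH group, so the negotiation that follows can only succeed with the pair under test; the jq checks on the resulting qpair then confirm the target negotiated that same pair. A stand-alone reproduction of the host side of this pass, assuming the sockets, addresses and key names visible in this run (the target must already have the host registered via nvmf_subsystem_add_host, as above):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Allow only sha256 + ffdhe8192 on the host for the next attach.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # Attach with key0 for host auth and ckey0 for controller (mutual) auth.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0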
00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.110 21:07:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.369 00:20:55.369 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.369 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.369 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.628 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.628 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.628 21:07:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.628 21:07:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.628 21:07:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.628 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.628 { 00:20:55.628 "cntlid": 41, 00:20:55.628 "qid": 0, 00:20:55.628 "state": "enabled", 00:20:55.628 "listen_address": { 00:20:55.628 "trtype": "RDMA", 00:20:55.628 "adrfam": "IPv4", 00:20:55.628 "traddr": "192.168.100.8", 00:20:55.628 "trsvcid": "4420" 00:20:55.628 }, 00:20:55.628 "peer_address": { 00:20:55.628 "trtype": "RDMA", 00:20:55.628 "adrfam": "IPv4", 00:20:55.628 "traddr": "192.168.100.8", 00:20:55.628 "trsvcid": "54677" 00:20:55.628 }, 00:20:55.628 "auth": { 00:20:55.628 "state": "completed", 00:20:55.628 "digest": "sha256", 00:20:55.628 "dhgroup": "ffdhe8192" 00:20:55.628 } 00:20:55.628 } 00:20:55.628 ]' 00:20:55.628 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.628 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.628 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.888 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.888 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.888 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.888 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.888 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.888 21:07:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:20:56.457 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.717 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:56.717 21:07:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.717 21:07:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.717 21:07:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.717 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.717 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.717 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.976 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:56.976 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.976 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:56.976 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.976 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.976 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.976 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.976 21:07:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.977 21:07:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.977 21:07:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.977 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.977 21:07:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.236 00:20:57.236 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.236 21:07:48 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.236 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.495 { 00:20:57.495 "cntlid": 43, 00:20:57.495 "qid": 0, 00:20:57.495 "state": "enabled", 00:20:57.495 "listen_address": { 00:20:57.495 "trtype": "RDMA", 00:20:57.495 "adrfam": "IPv4", 00:20:57.495 "traddr": "192.168.100.8", 00:20:57.495 "trsvcid": "4420" 00:20:57.495 }, 00:20:57.495 "peer_address": { 00:20:57.495 "trtype": "RDMA", 00:20:57.495 "adrfam": "IPv4", 00:20:57.495 "traddr": "192.168.100.8", 00:20:57.495 "trsvcid": "35953" 00:20:57.495 }, 00:20:57.495 "auth": { 00:20:57.495 "state": "completed", 00:20:57.495 "digest": "sha256", 00:20:57.495 "dhgroup": "ffdhe8192" 00:20:57.495 } 00:20:57.495 } 00:20:57.495 ]' 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.495 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.755 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.755 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.755 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.755 21:07:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:20:58.320 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.579 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:58.579 21:07:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.579 21:07:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.579 21:07:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:20:58.579 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.579 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:58.579 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.838 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.097 00:20:59.097 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.097 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.097 21:07:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.356 { 00:20:59.356 "cntlid": 45, 00:20:59.356 "qid": 0, 00:20:59.356 "state": "enabled", 00:20:59.356 "listen_address": { 00:20:59.356 "trtype": "RDMA", 00:20:59.356 "adrfam": "IPv4", 
00:20:59.356 "traddr": "192.168.100.8", 00:20:59.356 "trsvcid": "4420" 00:20:59.356 }, 00:20:59.356 "peer_address": { 00:20:59.356 "trtype": "RDMA", 00:20:59.356 "adrfam": "IPv4", 00:20:59.356 "traddr": "192.168.100.8", 00:20:59.356 "trsvcid": "44383" 00:20:59.356 }, 00:20:59.356 "auth": { 00:20:59.356 "state": "completed", 00:20:59.356 "digest": "sha256", 00:20:59.356 "dhgroup": "ffdhe8192" 00:20:59.356 } 00:20:59.356 } 00:20:59.356 ]' 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.356 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.615 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.615 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.615 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.615 21:07:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:21:00.281 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target 
-- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.539 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.106 00:21:01.106 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.106 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.106 21:07:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.366 { 00:21:01.366 "cntlid": 47, 00:21:01.366 "qid": 0, 00:21:01.366 "state": "enabled", 00:21:01.366 "listen_address": { 00:21:01.366 "trtype": "RDMA", 00:21:01.366 "adrfam": "IPv4", 00:21:01.366 "traddr": "192.168.100.8", 00:21:01.366 "trsvcid": "4420" 00:21:01.366 }, 00:21:01.366 "peer_address": { 00:21:01.366 "trtype": "RDMA", 00:21:01.366 "adrfam": "IPv4", 00:21:01.366 "traddr": "192.168.100.8", 00:21:01.366 "trsvcid": "60169" 00:21:01.366 }, 00:21:01.366 "auth": { 00:21:01.366 "state": "completed", 00:21:01.366 "digest": "sha256", 00:21:01.366 "dhgroup": "ffdhe8192" 00:21:01.366 } 00:21:01.366 } 00:21:01.366 ]' 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.366 21:07:52 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.366 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.626 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:21:02.193 21:07:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.451 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:02.451 21:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.451 21:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.452 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.452 21:07:53 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.709 00:21:02.709 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.709 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.709 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.967 { 00:21:02.967 "cntlid": 49, 00:21:02.967 "qid": 0, 00:21:02.967 "state": "enabled", 00:21:02.967 "listen_address": { 00:21:02.967 "trtype": "RDMA", 00:21:02.967 "adrfam": "IPv4", 00:21:02.967 "traddr": "192.168.100.8", 00:21:02.967 "trsvcid": "4420" 00:21:02.967 }, 00:21:02.967 "peer_address": { 00:21:02.967 "trtype": "RDMA", 00:21:02.967 "adrfam": "IPv4", 00:21:02.967 "traddr": "192.168.100.8", 00:21:02.967 "trsvcid": "54149" 00:21:02.967 }, 00:21:02.967 "auth": { 00:21:02.967 "state": "completed", 00:21:02.967 "digest": "sha384", 00:21:02.967 "dhgroup": "null" 00:21:02.967 } 00:21:02.967 } 00:21:02.967 ]' 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:02.967 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.225 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.225 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.225 21:07:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.225 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:21:03.791 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.049 
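Annotation: each pass also repeats the connection with the Linux kernel initiator (nvme connect at target/auth.sh@52), exercising the same target-side keys against a second host implementation. The --dhchap-secret strings use the NVMe DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:, where <t> is the transformation applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32. Dissecting the key0 pair used just above (the length check is illustrative, not part of the script):

    # Host secret key0: untransformed (field 00), 48-byte secret + CRC-32.
    #   DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==:
    # Controller secret ckey0: SHA-512 transformed (field 03), 64-byte secret + CRC-32.
    #   DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=:
    # Decode the first payload: 48 secret bytes + 4 CRC bytes = 52.
    echo 'M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==' |
        base64 -d | wc -c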
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.049 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:04.049 21:07:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.049 21:07:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.049 21:07:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.049 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.049 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:04.049 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.308 21:07:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.308 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.567 21:07:55 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.567 { 00:21:04.567 "cntlid": 51, 00:21:04.567 "qid": 0, 00:21:04.567 "state": "enabled", 00:21:04.567 "listen_address": { 00:21:04.567 "trtype": "RDMA", 00:21:04.567 "adrfam": "IPv4", 00:21:04.567 "traddr": "192.168.100.8", 00:21:04.567 "trsvcid": "4420" 00:21:04.567 }, 00:21:04.567 "peer_address": { 00:21:04.567 "trtype": "RDMA", 00:21:04.567 "adrfam": "IPv4", 00:21:04.567 "traddr": "192.168.100.8", 00:21:04.567 "trsvcid": "47842" 00:21:04.567 }, 00:21:04.567 "auth": { 00:21:04.567 "state": "completed", 00:21:04.567 "digest": "sha384", 00:21:04.567 "dhgroup": "null" 00:21:04.567 } 00:21:04.567 } 00:21:04.567 ]' 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.567 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.826 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:04.826 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.826 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.826 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.826 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.826 21:07:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:05.773 
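Annotation: in the sha384 passes above the DH group is "null", i.e. DH-HMAC-CHAP runs as plain challenge-response with no ephemeral Diffie-Hellman exchange, and the qpair's auth.dhgroup reports the literal string null, which the check at target/auth.sh@47 compares against. The verification in each pass boils down to the three assertions sketched below; digest and dhgroup are the pair pinned by the preceding bdev_nvme_set_options call:

    # target/auth.sh@44-48, written out: confirm the controller attached, then
    # compare the negotiated auth parameters on the first qpair.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]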
21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.773 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.774 21:07:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.774 21:07:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.774 21:07:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.774 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.774 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.031 00:21:06.031 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.031 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.031 21:07:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.290 { 00:21:06.290 "cntlid": 53, 00:21:06.290 "qid": 0, 00:21:06.290 "state": "enabled", 00:21:06.290 "listen_address": { 00:21:06.290 "trtype": "RDMA", 00:21:06.290 "adrfam": "IPv4", 00:21:06.290 "traddr": "192.168.100.8", 00:21:06.290 "trsvcid": "4420" 00:21:06.290 }, 00:21:06.290 "peer_address": { 00:21:06.290 "trtype": "RDMA", 00:21:06.290 "adrfam": "IPv4", 00:21:06.290 "traddr": "192.168.100.8", 00:21:06.290 "trsvcid": "40162" 00:21:06.290 }, 00:21:06.290 "auth": { 00:21:06.290 "state": "completed", 00:21:06.290 "digest": "sha384", 00:21:06.290 "dhgroup": "null" 00:21:06.290 } 00:21:06.290 } 00:21:06.290 ]' 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.290 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.549 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:21:07.117 21:07:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.376 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.636 00:21:07.636 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.636 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.636 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.895 { 00:21:07.895 "cntlid": 55, 00:21:07.895 "qid": 0, 00:21:07.895 "state": "enabled", 00:21:07.895 "listen_address": { 00:21:07.895 "trtype": "RDMA", 00:21:07.895 "adrfam": "IPv4", 00:21:07.895 "traddr": "192.168.100.8", 00:21:07.895 "trsvcid": "4420" 00:21:07.895 }, 00:21:07.895 "peer_address": { 00:21:07.895 "trtype": "RDMA", 00:21:07.895 "adrfam": "IPv4", 00:21:07.895 "traddr": "192.168.100.8", 00:21:07.895 "trsvcid": "37873" 00:21:07.895 }, 00:21:07.895 "auth": { 00:21:07.895 "state": "completed", 00:21:07.895 "digest": "sha384", 00:21:07.895 "dhgroup": "null" 00:21:07.895 } 00:21:07.895 } 00:21:07.895 ]' 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.895 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.154 21:07:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:21:08.722 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.981 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.982 21:07:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.241 00:21:09.241 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.241 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.241 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.499 { 00:21:09.499 "cntlid": 57, 00:21:09.499 "qid": 0, 00:21:09.499 "state": "enabled", 00:21:09.499 "listen_address": { 00:21:09.499 "trtype": "RDMA", 00:21:09.499 "adrfam": "IPv4", 00:21:09.499 "traddr": "192.168.100.8", 00:21:09.499 "trsvcid": "4420" 00:21:09.499 }, 00:21:09.499 "peer_address": { 00:21:09.499 "trtype": "RDMA", 00:21:09.499 "adrfam": "IPv4", 00:21:09.499 "traddr": "192.168.100.8", 00:21:09.499 "trsvcid": "38654" 00:21:09.499 }, 00:21:09.499 "auth": { 00:21:09.499 "state": "completed", 00:21:09.499 "digest": "sha384", 00:21:09.499 "dhgroup": "ffdhe2048" 00:21:09.499 } 00:21:09.499 } 00:21:09.499 ]' 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.499 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.758 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.758 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.758 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.758 21:08:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:21:10.324 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.582 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:10.582 21:08:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.582 21:08:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.582 21:08:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.582 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:21:10.582 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:10.582 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.841 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.099 00:21:11.099 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.099 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.099 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.099 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.099 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.099 21:08:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.099 21:08:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.099 21:08:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.099 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.099 { 00:21:11.099 "cntlid": 59, 00:21:11.099 "qid": 0, 00:21:11.099 "state": "enabled", 00:21:11.099 "listen_address": { 00:21:11.099 "trtype": "RDMA", 00:21:11.099 "adrfam": "IPv4", 00:21:11.099 "traddr": "192.168.100.8", 00:21:11.099 "trsvcid": "4420" 
00:21:11.099 }, 00:21:11.099 "peer_address": { 00:21:11.099 "trtype": "RDMA", 00:21:11.099 "adrfam": "IPv4", 00:21:11.099 "traddr": "192.168.100.8", 00:21:11.099 "trsvcid": "53415" 00:21:11.099 }, 00:21:11.099 "auth": { 00:21:11.099 "state": "completed", 00:21:11.099 "digest": "sha384", 00:21:11.099 "dhgroup": "ffdhe2048" 00:21:11.099 } 00:21:11.099 } 00:21:11.099 ]' 00:21:11.100 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.358 21:08:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.358 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.358 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.358 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.358 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.358 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.358 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.616 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:21:12.183 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.183 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:12.183 21:08:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.183 21:08:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.183 21:08:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.183 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.183 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:12.183 21:08:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.443 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.701 00:21:12.701 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.701 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.701 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.958 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.958 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.958 21:08:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.958 21:08:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.958 21:08:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.958 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.958 { 00:21:12.958 "cntlid": 61, 00:21:12.958 "qid": 0, 00:21:12.958 "state": "enabled", 00:21:12.958 "listen_address": { 00:21:12.958 "trtype": "RDMA", 00:21:12.958 "adrfam": "IPv4", 00:21:12.958 "traddr": "192.168.100.8", 00:21:12.959 "trsvcid": "4420" 00:21:12.959 }, 00:21:12.959 "peer_address": { 00:21:12.959 "trtype": "RDMA", 00:21:12.959 "adrfam": "IPv4", 00:21:12.959 "traddr": "192.168.100.8", 00:21:12.959 "trsvcid": "37048" 00:21:12.959 }, 00:21:12.959 "auth": { 00:21:12.959 "state": "completed", 00:21:12.959 "digest": "sha384", 00:21:12.959 "dhgroup": "ffdhe2048" 00:21:12.959 } 00:21:12.959 } 00:21:12.959 ]' 00:21:12.959 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.959 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.959 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.959 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.959 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.959 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.959 21:08:03 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.959 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.217 21:08:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:21:13.782 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.783 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:13.783 21:08:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.783 21:08:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.783 21:08:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.783 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.783 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:13.783 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:14.040 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.041 21:08:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.302 00:21:14.302 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.302 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.302 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.614 { 00:21:14.614 "cntlid": 63, 00:21:14.614 "qid": 0, 00:21:14.614 "state": "enabled", 00:21:14.614 "listen_address": { 00:21:14.614 "trtype": "RDMA", 00:21:14.614 "adrfam": "IPv4", 00:21:14.614 "traddr": "192.168.100.8", 00:21:14.614 "trsvcid": "4420" 00:21:14.614 }, 00:21:14.614 "peer_address": { 00:21:14.614 "trtype": "RDMA", 00:21:14.614 "adrfam": "IPv4", 00:21:14.614 "traddr": "192.168.100.8", 00:21:14.614 "trsvcid": "33861" 00:21:14.614 }, 00:21:14.614 "auth": { 00:21:14.614 "state": "completed", 00:21:14.614 "digest": "sha384", 00:21:14.614 "dhgroup": "ffdhe2048" 00:21:14.614 } 00:21:14.614 } 00:21:14.614 ]' 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.614 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.881 21:08:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:21:15.450 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.450 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:15.450 21:08:06 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.450 21:08:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.450 21:08:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.450 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.450 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.450 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.450 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.709 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.968 00:21:15.968 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.968 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.968 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.226 { 00:21:16.226 "cntlid": 65, 00:21:16.226 "qid": 0, 00:21:16.226 "state": "enabled", 00:21:16.226 "listen_address": { 00:21:16.226 "trtype": "RDMA", 00:21:16.226 "adrfam": "IPv4", 00:21:16.226 "traddr": "192.168.100.8", 00:21:16.226 "trsvcid": "4420" 00:21:16.226 }, 00:21:16.226 "peer_address": { 00:21:16.226 "trtype": "RDMA", 00:21:16.226 "adrfam": "IPv4", 00:21:16.226 "traddr": "192.168.100.8", 00:21:16.226 "trsvcid": "37627" 00:21:16.226 }, 00:21:16.226 "auth": { 00:21:16.226 "state": "completed", 00:21:16.226 "digest": "sha384", 00:21:16.226 "dhgroup": "ffdhe3072" 00:21:16.226 } 00:21:16.226 } 00:21:16.226 ]' 00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.226 21:08:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.226 21:08:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.226 21:08:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.226 21:08:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.484 21:08:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:21:17.052 21:08:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.052 21:08:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:17.052 21:08:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.052 21:08:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.052 21:08:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.052 21:08:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.052 21:08:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.052 21:08:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.311 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 
00:21:17.311 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.312 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.312 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:17.312 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:17.312 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.312 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.312 21:08:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.312 21:08:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.312 21:08:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.312 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.312 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.571 00:21:17.571 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.571 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.571 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.830 { 00:21:17.830 "cntlid": 67, 00:21:17.830 "qid": 0, 00:21:17.830 "state": "enabled", 00:21:17.830 "listen_address": { 00:21:17.830 "trtype": "RDMA", 00:21:17.830 "adrfam": "IPv4", 00:21:17.830 "traddr": "192.168.100.8", 00:21:17.830 "trsvcid": "4420" 00:21:17.830 }, 00:21:17.830 "peer_address": { 00:21:17.830 "trtype": "RDMA", 00:21:17.830 "adrfam": "IPv4", 00:21:17.830 "traddr": "192.168.100.8", 00:21:17.830 "trsvcid": "60616" 00:21:17.830 }, 00:21:17.830 "auth": { 00:21:17.830 "state": "completed", 00:21:17.830 "digest": "sha384", 00:21:17.830 "dhgroup": "ffdhe3072" 00:21:17.830 } 00:21:17.830 } 00:21:17.830 ]' 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.830 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.089 21:08:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:21:18.657 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.916 21:08:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.175 00:21:19.175 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.175 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.175 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.434 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.435 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.435 21:08:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.435 21:08:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.435 21:08:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.435 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.435 { 00:21:19.435 "cntlid": 69, 00:21:19.435 "qid": 0, 00:21:19.435 "state": "enabled", 00:21:19.435 "listen_address": { 00:21:19.435 "trtype": "RDMA", 00:21:19.435 "adrfam": "IPv4", 00:21:19.435 "traddr": "192.168.100.8", 00:21:19.435 "trsvcid": "4420" 00:21:19.435 }, 00:21:19.435 "peer_address": { 00:21:19.435 "trtype": "RDMA", 00:21:19.435 "adrfam": "IPv4", 00:21:19.435 "traddr": "192.168.100.8", 00:21:19.435 "trsvcid": "49686" 00:21:19.435 }, 00:21:19.435 "auth": { 00:21:19.435 "state": "completed", 00:21:19.435 "digest": "sha384", 00:21:19.435 "dhgroup": "ffdhe3072" 00:21:19.435 } 00:21:19.435 } 00:21:19.435 ]' 00:21:19.435 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.435 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.435 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.435 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:19.435 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.694 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.694 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.694 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.694 21:08:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e 
--dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:21:20.262 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.522 21:08:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.781 21:08:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.781 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.781 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.781 00:21:20.781 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.781 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.781 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.041 
21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.041 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.041 21:08:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.041 21:08:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.041 21:08:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.041 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.041 { 00:21:21.041 "cntlid": 71, 00:21:21.041 "qid": 0, 00:21:21.041 "state": "enabled", 00:21:21.041 "listen_address": { 00:21:21.041 "trtype": "RDMA", 00:21:21.041 "adrfam": "IPv4", 00:21:21.041 "traddr": "192.168.100.8", 00:21:21.041 "trsvcid": "4420" 00:21:21.041 }, 00:21:21.041 "peer_address": { 00:21:21.041 "trtype": "RDMA", 00:21:21.041 "adrfam": "IPv4", 00:21:21.041 "traddr": "192.168.100.8", 00:21:21.041 "trsvcid": "48902" 00:21:21.041 }, 00:21:21.041 "auth": { 00:21:21.041 "state": "completed", 00:21:21.041 "digest": "sha384", 00:21:21.041 "dhgroup": "ffdhe3072" 00:21:21.041 } 00:21:21.041 } 00:21:21.041 ]' 00:21:21.041 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.041 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.041 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.299 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:21.299 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.299 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.299 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.300 21:08:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.300 21:08:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:21:22.236 21:08:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.236 21:08:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:22.236 21:08:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.236 21:08:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.236 21:08:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.236 21:08:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.236 21:08:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.236 21:08:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.236 21:08:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.236 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.499 00:21:22.500 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.500 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.500 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.759 { 00:21:22.759 "cntlid": 73, 00:21:22.759 "qid": 0, 00:21:22.759 "state": "enabled", 00:21:22.759 "listen_address": { 00:21:22.759 "trtype": "RDMA", 00:21:22.759 "adrfam": "IPv4", 00:21:22.759 "traddr": "192.168.100.8", 00:21:22.759 "trsvcid": "4420" 00:21:22.759 }, 00:21:22.759 "peer_address": { 00:21:22.759 "trtype": "RDMA", 00:21:22.759 "adrfam": "IPv4", 00:21:22.759 
"traddr": "192.168.100.8", 00:21:22.759 "trsvcid": "40806" 00:21:22.759 }, 00:21:22.759 "auth": { 00:21:22.759 "state": "completed", 00:21:22.759 "digest": "sha384", 00:21:22.759 "dhgroup": "ffdhe4096" 00:21:22.759 } 00:21:22.759 } 00:21:22.759 ]' 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.759 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.016 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.016 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.016 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.016 21:08:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:21:23.582 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.840 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:23.840 21:08:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.840 21:08:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.840 21:08:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.840 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.840 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:23.840 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.099 21:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.358 00:21:24.358 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.358 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.358 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.358 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.358 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.358 21:08:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.359 21:08:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.359 21:08:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.359 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.359 { 00:21:24.359 "cntlid": 75, 00:21:24.359 "qid": 0, 00:21:24.359 "state": "enabled", 00:21:24.359 "listen_address": { 00:21:24.359 "trtype": "RDMA", 00:21:24.359 "adrfam": "IPv4", 00:21:24.359 "traddr": "192.168.100.8", 00:21:24.359 "trsvcid": "4420" 00:21:24.359 }, 00:21:24.359 "peer_address": { 00:21:24.359 "trtype": "RDMA", 00:21:24.359 "adrfam": "IPv4", 00:21:24.359 "traddr": "192.168.100.8", 00:21:24.359 "trsvcid": "38708" 00:21:24.359 }, 00:21:24.359 "auth": { 00:21:24.359 "state": "completed", 00:21:24.359 "digest": "sha384", 00:21:24.359 "dhgroup": "ffdhe4096" 00:21:24.359 } 00:21:24.359 } 00:21:24.359 ]' 00:21:24.359 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.618 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.618 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.618 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.618 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.618 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.618 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:24.618 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.877 21:08:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:21:25.445 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.445 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:25.445 21:08:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.445 21:08:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.445 21:08:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.445 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.445 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.445 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.703 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:25.703 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.703 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:25.703 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:25.704 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:25.704 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.704 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.704 21:08:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.704 21:08:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.704 21:08:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.704 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.704 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.962 00:21:25.962 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.962 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.962 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.221 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.221 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.221 21:08:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.221 21:08:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.221 21:08:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.221 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.221 { 00:21:26.221 "cntlid": 77, 00:21:26.221 "qid": 0, 00:21:26.221 "state": "enabled", 00:21:26.221 "listen_address": { 00:21:26.221 "trtype": "RDMA", 00:21:26.221 "adrfam": "IPv4", 00:21:26.221 "traddr": "192.168.100.8", 00:21:26.221 "trsvcid": "4420" 00:21:26.221 }, 00:21:26.221 "peer_address": { 00:21:26.221 "trtype": "RDMA", 00:21:26.221 "adrfam": "IPv4", 00:21:26.221 "traddr": "192.168.100.8", 00:21:26.221 "trsvcid": "34211" 00:21:26.221 }, 00:21:26.221 "auth": { 00:21:26.221 "state": "completed", 00:21:26.221 "digest": "sha384", 00:21:26.221 "dhgroup": "ffdhe4096" 00:21:26.221 } 00:21:26.221 } 00:21:26.221 ]' 00:21:26.221 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.221 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.221 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.221 21:08:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.221 21:08:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.221 21:08:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.221 21:08:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.221 21:08:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.479 21:08:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:21:27.046 21:08:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.046 21:08:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:27.046 21:08:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.046 21:08:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.306 21:08:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.306 21:08:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.306 21:08:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:27.306 21:08:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.306 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.565 00:21:27.565 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.565 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.565 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.823 { 00:21:27.823 "cntlid": 79, 00:21:27.823 "qid": 0, 00:21:27.823 "state": "enabled", 00:21:27.823 "listen_address": { 00:21:27.823 "trtype": "RDMA", 00:21:27.823 "adrfam": "IPv4", 00:21:27.823 "traddr": "192.168.100.8", 00:21:27.823 "trsvcid": "4420" 00:21:27.823 }, 00:21:27.823 "peer_address": { 00:21:27.823 "trtype": "RDMA", 00:21:27.823 "adrfam": "IPv4", 00:21:27.823 "traddr": "192.168.100.8", 00:21:27.823 "trsvcid": "51823" 00:21:27.823 }, 00:21:27.823 "auth": { 00:21:27.823 "state": "completed", 00:21:27.823 "digest": "sha384", 00:21:27.823 "dhgroup": "ffdhe4096" 00:21:27.823 } 00:21:27.823 } 00:21:27.823 ]' 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.823 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.081 21:08:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:21:28.649 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.908 21:08:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.168 21:08:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.168 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.168 21:08:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.427 00:21:29.427 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.427 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.428 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.687 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.687 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.687 21:08:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.687 21:08:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.687 21:08:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.687 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.687 { 00:21:29.687 "cntlid": 81, 00:21:29.687 "qid": 0, 00:21:29.687 "state": "enabled", 00:21:29.687 "listen_address": { 00:21:29.687 "trtype": "RDMA", 00:21:29.687 "adrfam": "IPv4", 00:21:29.687 "traddr": "192.168.100.8", 00:21:29.687 "trsvcid": "4420" 00:21:29.687 }, 00:21:29.687 "peer_address": { 00:21:29.687 "trtype": "RDMA", 00:21:29.687 "adrfam": "IPv4", 00:21:29.687 "traddr": "192.168.100.8", 00:21:29.687 "trsvcid": "55217" 00:21:29.687 }, 00:21:29.687 "auth": { 00:21:29.687 "state": "completed", 00:21:29.687 "digest": "sha384", 00:21:29.687 "dhgroup": "ffdhe6144" 00:21:29.687 } 00:21:29.687 } 00:21:29.687 ]' 00:21:29.688 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.688 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.688 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:29.688 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.688 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.688 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.688 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.688 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.947 21:08:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:21:30.518 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.518 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:30.518 21:08:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.518 21:08:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.518 21:08:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.518 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.518 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.518 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.777 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.035 00:21:31.035 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.035 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.035 21:08:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.293 { 00:21:31.293 "cntlid": 83, 00:21:31.293 "qid": 0, 00:21:31.293 "state": "enabled", 00:21:31.293 "listen_address": { 00:21:31.293 "trtype": "RDMA", 00:21:31.293 "adrfam": "IPv4", 00:21:31.293 "traddr": "192.168.100.8", 00:21:31.293 "trsvcid": "4420" 00:21:31.293 }, 00:21:31.293 "peer_address": { 00:21:31.293 "trtype": "RDMA", 00:21:31.293 "adrfam": "IPv4", 00:21:31.293 "traddr": "192.168.100.8", 00:21:31.293 "trsvcid": "60650" 00:21:31.293 }, 00:21:31.293 "auth": { 00:21:31.293 "state": "completed", 00:21:31.293 "digest": "sha384", 00:21:31.293 "dhgroup": "ffdhe6144" 00:21:31.293 } 00:21:31.293 } 00:21:31.293 ]' 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:31.293 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.551 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.551 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.551 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.551 21:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.569 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.826 00:21:32.826 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.826 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.827 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.084 { 00:21:33.084 "cntlid": 85, 00:21:33.084 "qid": 0, 00:21:33.084 "state": "enabled", 00:21:33.084 "listen_address": { 00:21:33.084 "trtype": "RDMA", 00:21:33.084 "adrfam": "IPv4", 00:21:33.084 "traddr": "192.168.100.8", 00:21:33.084 "trsvcid": "4420" 00:21:33.084 }, 00:21:33.084 "peer_address": { 00:21:33.084 "trtype": "RDMA", 00:21:33.084 "adrfam": "IPv4", 00:21:33.084 "traddr": "192.168.100.8", 00:21:33.084 "trsvcid": "51788" 00:21:33.084 }, 00:21:33.084 "auth": { 00:21:33.084 "state": "completed", 00:21:33.084 "digest": "sha384", 00:21:33.084 "dhgroup": "ffdhe6144" 00:21:33.084 } 00:21:33.084 } 00:21:33.084 ]' 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.084 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.342 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.342 21:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.342 21:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:21:33.908 21:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.166 21:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:34.166 21:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.166 21:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.166 21:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.166 21:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.166 21:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:34.166 21:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.425 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.684 00:21:34.684 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.684 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.684 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.942 { 00:21:34.942 "cntlid": 87, 00:21:34.942 "qid": 0, 00:21:34.942 "state": "enabled", 00:21:34.942 "listen_address": { 00:21:34.942 "trtype": "RDMA", 00:21:34.942 "adrfam": "IPv4", 00:21:34.942 "traddr": "192.168.100.8", 00:21:34.942 "trsvcid": "4420" 00:21:34.942 }, 00:21:34.942 "peer_address": { 00:21:34.942 "trtype": "RDMA", 00:21:34.942 "adrfam": "IPv4", 00:21:34.942 "traddr": "192.168.100.8", 00:21:34.942 "trsvcid": "55663" 
00:21:34.942 }, 00:21:34.942 "auth": { 00:21:34.942 "state": "completed", 00:21:34.942 "digest": "sha384", 00:21:34.942 "dhgroup": "ffdhe6144" 00:21:34.942 } 00:21:34.942 } 00:21:34.942 ]' 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.942 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.201 21:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:21:35.767 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.767 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:35.767 21:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.767 21:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.767 21:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.767 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.767 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.767 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:35.767 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.026 21:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.593 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.593 { 00:21:36.593 "cntlid": 89, 00:21:36.593 "qid": 0, 00:21:36.593 "state": "enabled", 00:21:36.593 "listen_address": { 00:21:36.593 "trtype": "RDMA", 00:21:36.593 "adrfam": "IPv4", 00:21:36.593 "traddr": "192.168.100.8", 00:21:36.593 "trsvcid": "4420" 00:21:36.593 }, 00:21:36.593 "peer_address": { 00:21:36.593 "trtype": "RDMA", 00:21:36.593 "adrfam": "IPv4", 00:21:36.593 "traddr": "192.168.100.8", 00:21:36.593 "trsvcid": "39218" 00:21:36.593 }, 00:21:36.593 "auth": { 00:21:36.593 "state": "completed", 00:21:36.593 "digest": "sha384", 00:21:36.593 "dhgroup": "ffdhe8192" 00:21:36.593 } 00:21:36.593 } 00:21:36.593 ]' 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.593 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.852 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:36.852 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.852 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.852 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.852 21:08:27 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.110 21:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:21:37.677 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.678 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:37.678 21:08:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.678 21:08:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.678 21:08:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.678 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.678 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.678 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.936 21:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.197 00:21:38.197 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.197 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.197 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.457 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.457 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.457 21:08:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.457 21:08:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.457 21:08:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.457 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.457 { 00:21:38.457 "cntlid": 91, 00:21:38.457 "qid": 0, 00:21:38.457 "state": "enabled", 00:21:38.457 "listen_address": { 00:21:38.457 "trtype": "RDMA", 00:21:38.457 "adrfam": "IPv4", 00:21:38.457 "traddr": "192.168.100.8", 00:21:38.457 "trsvcid": "4420" 00:21:38.457 }, 00:21:38.457 "peer_address": { 00:21:38.457 "trtype": "RDMA", 00:21:38.457 "adrfam": "IPv4", 00:21:38.457 "traddr": "192.168.100.8", 00:21:38.457 "trsvcid": "58399" 00:21:38.457 }, 00:21:38.457 "auth": { 00:21:38.457 "state": "completed", 00:21:38.457 "digest": "sha384", 00:21:38.457 "dhgroup": "ffdhe8192" 00:21:38.457 } 00:21:38.457 } 00:21:38.457 ]' 00:21:38.457 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.457 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.457 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.717 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.717 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.717 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.717 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.717 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.718 21:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.657 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.226 00:21:40.226 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.226 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.226 21:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.485 21:08:31 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.485 { 00:21:40.485 "cntlid": 93, 00:21:40.485 "qid": 0, 00:21:40.485 "state": "enabled", 00:21:40.485 "listen_address": { 00:21:40.485 "trtype": "RDMA", 00:21:40.485 "adrfam": "IPv4", 00:21:40.485 "traddr": "192.168.100.8", 00:21:40.485 "trsvcid": "4420" 00:21:40.485 }, 00:21:40.485 "peer_address": { 00:21:40.485 "trtype": "RDMA", 00:21:40.485 "adrfam": "IPv4", 00:21:40.485 "traddr": "192.168.100.8", 00:21:40.485 "trsvcid": "37057" 00:21:40.485 }, 00:21:40.485 "auth": { 00:21:40.485 "state": "completed", 00:21:40.485 "digest": "sha384", 00:21:40.485 "dhgroup": "ffdhe8192" 00:21:40.485 } 00:21:40.485 } 00:21:40.485 ]' 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.485 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.744 21:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:21:41.313 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.313 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:41.313 21:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.313 21:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.313 21:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.313 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.313 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:41.313 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.573 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.167 00:21:42.167 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.167 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.167 21:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.167 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.167 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.167 21:08:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.167 21:08:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.167 21:08:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.167 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.167 { 00:21:42.167 "cntlid": 95, 00:21:42.167 "qid": 0, 00:21:42.167 "state": "enabled", 00:21:42.167 "listen_address": { 00:21:42.167 "trtype": "RDMA", 00:21:42.167 "adrfam": "IPv4", 00:21:42.167 "traddr": "192.168.100.8", 00:21:42.167 "trsvcid": "4420" 00:21:42.167 }, 00:21:42.167 "peer_address": { 00:21:42.167 "trtype": "RDMA", 00:21:42.167 "adrfam": "IPv4", 00:21:42.167 "traddr": "192.168.100.8", 00:21:42.167 "trsvcid": "33359" 00:21:42.167 }, 00:21:42.167 "auth": { 00:21:42.167 "state": "completed", 00:21:42.167 "digest": "sha384", 00:21:42.167 "dhgroup": "ffdhe8192" 00:21:42.167 } 00:21:42.167 } 00:21:42.167 ]' 00:21:42.167 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.426 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.426 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.426 
21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.426 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.426 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.426 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.426 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.684 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:21:43.252 21:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.252 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:43.252 21:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.252 21:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.252 21:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.252 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:43.252 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.252 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.252 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:43.252 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:43.511 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
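
Note: at this point the sweep has finished the sha384/ffdhe8192 pass and moved to sha512 with the null dhgroup, key0. Each connect_authenticate iteration follows the same recipe visible in the records: pin the host app to one digest/dhgroup pair, admit the host NQN on the subsystem with the key under test, then attach a controller so the DH-HMAC-CHAP handshake actually runs. A minimal sketch of one such iteration, assuming a running SPDK target on the default RPC socket, the host app on /var/tmp/host.sock, and keyring entries key0/ckey0 already loaded (setup not shown in this section):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  subnqn=nqn.2024-03.io.spdk:cnode0
  # host side: restrict negotiation to a single digest and dhgroup
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
  # target side: admit the host with a host key and, for bidirectional auth, a controller key
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attaching the controller performs the authentication handshake
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

key3 is the unidirectional case: its nvmf_subsystem_add_host calls carry no --dhchap-ctrlr-key, matching the empty ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion in connect_authenticate.
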
00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.512 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.772 00:21:43.772 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.772 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.772 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.772 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.772 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.772 21:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.772 21:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.772 21:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.772 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.772 { 00:21:43.772 "cntlid": 97, 00:21:43.772 "qid": 0, 00:21:43.772 "state": "enabled", 00:21:43.772 "listen_address": { 00:21:43.772 "trtype": "RDMA", 00:21:43.772 "adrfam": "IPv4", 00:21:43.772 "traddr": "192.168.100.8", 00:21:43.772 "trsvcid": "4420" 00:21:43.772 }, 00:21:43.772 "peer_address": { 00:21:43.772 "trtype": "RDMA", 00:21:43.772 "adrfam": "IPv4", 00:21:43.772 "traddr": "192.168.100.8", 00:21:43.772 "trsvcid": "39696" 00:21:43.772 }, 00:21:43.772 "auth": { 00:21:43.772 "state": "completed", 00:21:43.772 "digest": "sha512", 00:21:43.772 "dhgroup": "null" 00:21:43.772 } 00:21:43.772 } 00:21:43.772 ]' 00:21:43.772 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.031 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.031 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.031 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:44.031 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.031 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.031 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.031 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.289 21:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:21:44.856 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.856 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:44.856 21:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.856 21:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.856 21:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.856 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.856 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.856 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.115 21:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.374 00:21:45.374 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.374 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.374 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.374 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.374 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.374 21:08:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.374 21:08:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.374 21:08:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.635 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.635 { 00:21:45.635 "cntlid": 99, 00:21:45.635 "qid": 0, 00:21:45.635 "state": "enabled", 00:21:45.635 "listen_address": { 00:21:45.635 "trtype": "RDMA", 00:21:45.635 "adrfam": "IPv4", 00:21:45.635 "traddr": "192.168.100.8", 00:21:45.635 "trsvcid": "4420" 00:21:45.635 }, 00:21:45.635 "peer_address": { 00:21:45.635 "trtype": "RDMA", 00:21:45.635 "adrfam": "IPv4", 00:21:45.635 "traddr": "192.168.100.8", 00:21:45.635 "trsvcid": "60596" 00:21:45.635 }, 00:21:45.635 "auth": { 00:21:45.635 "state": "completed", 00:21:45.635 "digest": "sha512", 00:21:45.635 "dhgroup": "null" 00:21:45.635 } 00:21:45.635 } 00:21:45.635 ]' 00:21:45.635 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.635 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.635 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.635 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:45.635 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.635 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.635 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.635 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.895 21:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:21:46.465 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.465 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:46.465 21:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.465 21:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.465 21:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.465 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.465 21:08:37 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.465 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.725 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.985 00:21:46.986 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.986 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.986 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.246 { 00:21:47.246 "cntlid": 101, 00:21:47.246 "qid": 0, 00:21:47.246 "state": "enabled", 00:21:47.246 "listen_address": { 00:21:47.246 "trtype": "RDMA", 00:21:47.246 "adrfam": "IPv4", 00:21:47.246 "traddr": "192.168.100.8", 00:21:47.246 "trsvcid": "4420" 00:21:47.246 }, 00:21:47.246 "peer_address": { 00:21:47.246 "trtype": "RDMA", 
00:21:47.246 "adrfam": "IPv4", 00:21:47.246 "traddr": "192.168.100.8", 00:21:47.246 "trsvcid": "38505" 00:21:47.246 }, 00:21:47.246 "auth": { 00:21:47.246 "state": "completed", 00:21:47.246 "digest": "sha512", 00:21:47.246 "dhgroup": "null" 00:21:47.246 } 00:21:47.246 } 00:21:47.246 ]' 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:47.246 21:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.246 21:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.246 21:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.246 21:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.506 21:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:21:48.075 21:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.075 21:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:48.075 21:08:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.075 21:08:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.075 21:08:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.075 21:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.075 21:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:48.075 21:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.335 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.594 00:21:48.594 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.594 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.594 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.892 { 00:21:48.892 "cntlid": 103, 00:21:48.892 "qid": 0, 00:21:48.892 "state": "enabled", 00:21:48.892 "listen_address": { 00:21:48.892 "trtype": "RDMA", 00:21:48.892 "adrfam": "IPv4", 00:21:48.892 "traddr": "192.168.100.8", 00:21:48.892 "trsvcid": "4420" 00:21:48.892 }, 00:21:48.892 "peer_address": { 00:21:48.892 "trtype": "RDMA", 00:21:48.892 "adrfam": "IPv4", 00:21:48.892 "traddr": "192.168.100.8", 00:21:48.892 "trsvcid": "38406" 00:21:48.892 }, 00:21:48.892 "auth": { 00:21:48.892 "state": "completed", 00:21:48.892 "digest": "sha512", 00:21:48.892 "dhgroup": "null" 00:21:48.892 } 00:21:48.892 } 00:21:48.892 ]' 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.892 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.171 21:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:21:49.741 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.000 21:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.261 00:21:50.261 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.261 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.261 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.521 { 00:21:50.521 "cntlid": 105, 00:21:50.521 "qid": 0, 00:21:50.521 "state": "enabled", 00:21:50.521 "listen_address": { 00:21:50.521 "trtype": "RDMA", 00:21:50.521 "adrfam": "IPv4", 00:21:50.521 "traddr": "192.168.100.8", 00:21:50.521 "trsvcid": "4420" 00:21:50.521 }, 00:21:50.521 "peer_address": { 00:21:50.521 "trtype": "RDMA", 00:21:50.521 "adrfam": "IPv4", 00:21:50.521 "traddr": "192.168.100.8", 00:21:50.521 "trsvcid": "46970" 00:21:50.521 }, 00:21:50.521 "auth": { 00:21:50.521 "state": "completed", 00:21:50.521 "digest": "sha512", 00:21:50.521 "dhgroup": "ffdhe2048" 00:21:50.521 } 00:21:50.521 } 00:21:50.521 ]' 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.521 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.779 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.779 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.779 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.779 21:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:21:51.345 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.603 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:51.603 21:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.603 21:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.603 21:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.603 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.603 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.603 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.862 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.120 00:21:52.120 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.120 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.120 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.120 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.120 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.120 21:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.120 21:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.120 21:08:42 
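
Note: the success criterion for each iteration is the qpair dump that follows, where the target reports the negotiated digest, dhgroup, and an auth state of "completed". A sketch of the check, reusing the rpc/subnqn variables from the earlier sketch and assuming a single qpair on the subsystem (this iteration negotiated sha512/ffdhe2048):

  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
  # tear down before the next iteration
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
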
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.120 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.120 { 00:21:52.120 "cntlid": 107, 00:21:52.120 "qid": 0, 00:21:52.120 "state": "enabled", 00:21:52.120 "listen_address": { 00:21:52.120 "trtype": "RDMA", 00:21:52.120 "adrfam": "IPv4", 00:21:52.120 "traddr": "192.168.100.8", 00:21:52.120 "trsvcid": "4420" 00:21:52.120 }, 00:21:52.120 "peer_address": { 00:21:52.120 "trtype": "RDMA", 00:21:52.120 "adrfam": "IPv4", 00:21:52.120 "traddr": "192.168.100.8", 00:21:52.120 "trsvcid": "37787" 00:21:52.120 }, 00:21:52.120 "auth": { 00:21:52.120 "state": "completed", 00:21:52.120 "digest": "sha512", 00:21:52.120 "dhgroup": "ffdhe2048" 00:21:52.120 } 00:21:52.120 } 00:21:52.120 ]' 00:21:52.120 21:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.120 21:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.120 21:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.378 21:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:52.378 21:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.379 21:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.379 21:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.379 21:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.636 21:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:21:53.203 21:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.203 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:53.203 21:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.203 21:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.203 21:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.203 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.203 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:53.203 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.461 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.462 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.720 00:21:53.720 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.720 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.720 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.720 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.720 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.720 21:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.720 21:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.980 21:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.980 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.980 { 00:21:53.980 "cntlid": 109, 00:21:53.980 "qid": 0, 00:21:53.980 "state": "enabled", 00:21:53.980 "listen_address": { 00:21:53.980 "trtype": "RDMA", 00:21:53.980 "adrfam": "IPv4", 00:21:53.980 "traddr": "192.168.100.8", 00:21:53.980 "trsvcid": "4420" 00:21:53.980 }, 00:21:53.980 "peer_address": { 00:21:53.980 "trtype": "RDMA", 00:21:53.980 "adrfam": "IPv4", 00:21:53.980 "traddr": "192.168.100.8", 00:21:53.980 "trsvcid": "38520" 00:21:53.980 }, 00:21:53.980 "auth": { 00:21:53.980 "state": "completed", 00:21:53.980 "digest": "sha512", 00:21:53.980 "dhgroup": "ffdhe2048" 00:21:53.980 } 00:21:53.980 } 00:21:53.980 ]' 00:21:53.980 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.980 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.980 21:08:44 
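
Note: on the nvme-cli side each key appears as an in-band secret string. In the DHHC-1:NN: prefix seen in the connect records throughout this section, NN records the hash used to transform the configured secret (00 = no transformation, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, per the NVMe DH-HMAC-CHAP secret representation), which is why key0 through key3 surface as DHHC-1:00: through DHHC-1:03:, e.g.:

  --dhchap-secret 'DHHC-1:02:<base64 key material>:'

The base64 payloads are the generated test keys, shown in full in the surrounding records.
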
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.980 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:53.980 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.980 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.980 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.980 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.240 21:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:21:54.809 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.809 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:54.809 21:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.809 21:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.809 21:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.809 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.809 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:54.809 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:55.068 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:55.068 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.068 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.069 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:55.069 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:55.069 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.069 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:55.069 21:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.069 21:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.069 21:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.069 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # 
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.069 21:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.328 00:21:55.328 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.328 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.328 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.587 { 00:21:55.587 "cntlid": 111, 00:21:55.587 "qid": 0, 00:21:55.587 "state": "enabled", 00:21:55.587 "listen_address": { 00:21:55.587 "trtype": "RDMA", 00:21:55.587 "adrfam": "IPv4", 00:21:55.587 "traddr": "192.168.100.8", 00:21:55.587 "trsvcid": "4420" 00:21:55.587 }, 00:21:55.587 "peer_address": { 00:21:55.587 "trtype": "RDMA", 00:21:55.587 "adrfam": "IPv4", 00:21:55.587 "traddr": "192.168.100.8", 00:21:55.587 "trsvcid": "60097" 00:21:55.587 }, 00:21:55.587 "auth": { 00:21:55.587 "state": "completed", 00:21:55.587 "digest": "sha512", 00:21:55.587 "dhgroup": "ffdhe2048" 00:21:55.587 } 00:21:55.587 } 00:21:55.587 ]' 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.587 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.846 21:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:21:56.414 
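
Note: after the SPDK-host attach/detach pass, the same credentials are exercised through the kernel initiator, as in the connect record above and the disconnect that follows: nvme connect is handed the raw DHHC-1 secrets plus the fabric address, and a clean disconnect confirms the controller came up. A sketch with the secret elided (the full string appears in the record above); for key3, the unidirectional case, --dhchap-ctrl-secret is simply omitted:

  nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 8013ee90-59d8-e711-906e-00163566263e \
      --dhchap-secret 'DHHC-1:03:...:'
  nvme disconnect -n "$subnqn"
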
21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.414 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:56.414 21:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.414 21:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.414 21:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.414 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.414 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.414 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.414 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.673 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.932 00:21:56.932 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.932 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.932 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.192 21:08:47 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.192 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.192 21:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.192 21:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.192 21:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.192 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.192 { 00:21:57.192 "cntlid": 113, 00:21:57.192 "qid": 0, 00:21:57.192 "state": "enabled", 00:21:57.192 "listen_address": { 00:21:57.192 "trtype": "RDMA", 00:21:57.192 "adrfam": "IPv4", 00:21:57.192 "traddr": "192.168.100.8", 00:21:57.192 "trsvcid": "4420" 00:21:57.192 }, 00:21:57.192 "peer_address": { 00:21:57.192 "trtype": "RDMA", 00:21:57.192 "adrfam": "IPv4", 00:21:57.192 "traddr": "192.168.100.8", 00:21:57.192 "trsvcid": "33834" 00:21:57.192 }, 00:21:57.192 "auth": { 00:21:57.192 "state": "completed", 00:21:57.192 "digest": "sha512", 00:21:57.192 "dhgroup": "ffdhe3072" 00:21:57.192 } 00:21:57.192 } 00:21:57.192 ]' 00:21:57.192 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.192 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.192 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.192 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.192 21:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.192 21:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.192 21:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.192 21:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.450 21:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:21:58.017 21:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.275 21:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:58.276 21:08:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.276 21:08:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.276 21:08:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.276 21:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.276 21:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.276 21:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.276 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.535 00:21:58.535 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.535 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.535 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.793 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.793 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.793 21:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.794 { 00:21:58.794 "cntlid": 115, 00:21:58.794 "qid": 0, 00:21:58.794 "state": "enabled", 00:21:58.794 "listen_address": { 00:21:58.794 "trtype": "RDMA", 00:21:58.794 "adrfam": "IPv4", 00:21:58.794 "traddr": "192.168.100.8", 00:21:58.794 "trsvcid": "4420" 00:21:58.794 }, 00:21:58.794 "peer_address": { 00:21:58.794 "trtype": "RDMA", 00:21:58.794 "adrfam": "IPv4", 00:21:58.794 
"traddr": "192.168.100.8", 00:21:58.794 "trsvcid": "33880" 00:21:58.794 }, 00:21:58.794 "auth": { 00:21:58.794 "state": "completed", 00:21:58.794 "digest": "sha512", 00:21:58.794 "dhgroup": "ffdhe3072" 00:21:58.794 } 00:21:58.794 } 00:21:58.794 ]' 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.794 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.052 21:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:21:59.619 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.878 21:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.137 21:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.137 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.137 21:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.137 00:22:00.137 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.137 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.137 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.395 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.395 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.395 21:08:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.395 21:08:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.395 21:08:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.395 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.395 { 00:22:00.395 "cntlid": 117, 00:22:00.395 "qid": 0, 00:22:00.395 "state": "enabled", 00:22:00.395 "listen_address": { 00:22:00.395 "trtype": "RDMA", 00:22:00.396 "adrfam": "IPv4", 00:22:00.396 "traddr": "192.168.100.8", 00:22:00.396 "trsvcid": "4420" 00:22:00.396 }, 00:22:00.396 "peer_address": { 00:22:00.396 "trtype": "RDMA", 00:22:00.396 "adrfam": "IPv4", 00:22:00.396 "traddr": "192.168.100.8", 00:22:00.396 "trsvcid": "41606" 00:22:00.396 }, 00:22:00.396 "auth": { 00:22:00.396 "state": "completed", 00:22:00.396 "digest": "sha512", 00:22:00.396 "dhgroup": "ffdhe3072" 00:22:00.396 } 00:22:00.396 } 00:22:00.396 ]' 00:22:00.396 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.396 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.396 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.654 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:00.655 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.655 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.655 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.655 21:08:51 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.655 21:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.593 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:22:01.852 00:22:01.852 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.852 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.852 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.111 { 00:22:02.111 "cntlid": 119, 00:22:02.111 "qid": 0, 00:22:02.111 "state": "enabled", 00:22:02.111 "listen_address": { 00:22:02.111 "trtype": "RDMA", 00:22:02.111 "adrfam": "IPv4", 00:22:02.111 "traddr": "192.168.100.8", 00:22:02.111 "trsvcid": "4420" 00:22:02.111 }, 00:22:02.111 "peer_address": { 00:22:02.111 "trtype": "RDMA", 00:22:02.111 "adrfam": "IPv4", 00:22:02.111 "traddr": "192.168.100.8", 00:22:02.111 "trsvcid": "44470" 00:22:02.111 }, 00:22:02.111 "auth": { 00:22:02.111 "state": "completed", 00:22:02.111 "digest": "sha512", 00:22:02.111 "dhgroup": "ffdhe3072" 00:22:02.111 } 00:22:02.111 } 00:22:02.111 ]' 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:02.111 21:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.371 21:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.371 21:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.371 21:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.371 21:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:22:02.938 21:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.196 21:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:03.196 21:08:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.196 21:08:53 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.196 21:08:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.196 21:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.196 21:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.196 21:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.196 21:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.456 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.715 00:22:03.715 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.715 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.715 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.715 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.715 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.715 21:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.715 21:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.715 21:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.715 21:08:54 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.715 { 00:22:03.715 "cntlid": 121, 00:22:03.715 "qid": 0, 00:22:03.716 "state": "enabled", 00:22:03.716 "listen_address": { 00:22:03.716 "trtype": "RDMA", 00:22:03.716 "adrfam": "IPv4", 00:22:03.716 "traddr": "192.168.100.8", 00:22:03.716 "trsvcid": "4420" 00:22:03.716 }, 00:22:03.716 "peer_address": { 00:22:03.716 "trtype": "RDMA", 00:22:03.716 "adrfam": "IPv4", 00:22:03.716 "traddr": "192.168.100.8", 00:22:03.716 "trsvcid": "49147" 00:22:03.716 }, 00:22:03.716 "auth": { 00:22:03.716 "state": "completed", 00:22:03.716 "digest": "sha512", 00:22:03.716 "dhgroup": "ffdhe4096" 00:22:03.716 } 00:22:03.716 } 00:22:03.716 ]' 00:22:03.716 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.716 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.716 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.975 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.975 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.975 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.975 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.975 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.975 21:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:22:04.912 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.913 
21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.913 21:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.172 00:22:05.172 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.172 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.172 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.432 { 00:22:05.432 "cntlid": 123, 00:22:05.432 "qid": 0, 00:22:05.432 "state": "enabled", 00:22:05.432 "listen_address": { 00:22:05.432 "trtype": "RDMA", 00:22:05.432 "adrfam": "IPv4", 00:22:05.432 "traddr": "192.168.100.8", 00:22:05.432 "trsvcid": "4420" 00:22:05.432 }, 00:22:05.432 "peer_address": { 00:22:05.432 "trtype": "RDMA", 00:22:05.432 "adrfam": "IPv4", 00:22:05.432 "traddr": "192.168.100.8", 00:22:05.432 "trsvcid": "56399" 00:22:05.432 }, 00:22:05.432 "auth": { 00:22:05.432 "state": "completed", 00:22:05.432 "digest": "sha512", 00:22:05.432 "dhgroup": "ffdhe4096" 00:22:05.432 } 00:22:05.432 } 00:22:05.432 ]' 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:05.432 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.698 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.698 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.698 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.698 21:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:22:06.301 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.561 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.820 00:22:06.820 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.820 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.820 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.079 { 00:22:07.079 "cntlid": 125, 00:22:07.079 "qid": 0, 00:22:07.079 "state": "enabled", 00:22:07.079 "listen_address": { 00:22:07.079 "trtype": "RDMA", 00:22:07.079 "adrfam": "IPv4", 00:22:07.079 "traddr": "192.168.100.8", 00:22:07.079 "trsvcid": "4420" 00:22:07.079 }, 00:22:07.079 "peer_address": { 00:22:07.079 "trtype": "RDMA", 00:22:07.079 "adrfam": "IPv4", 00:22:07.079 "traddr": "192.168.100.8", 00:22:07.079 "trsvcid": "37476" 00:22:07.079 }, 00:22:07.079 "auth": { 00:22:07.079 "state": "completed", 00:22:07.079 "digest": "sha512", 00:22:07.079 "dhgroup": "ffdhe4096" 00:22:07.079 } 00:22:07.079 } 00:22:07.079 ]' 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:07.079 21:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.339 21:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.339 21:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.339 21:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.339 21:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:22:07.908 21:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.168 21:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:08.168 21:08:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.168 21:08:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.168 21:08:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.168 21:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.168 21:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.168 21:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:08.426 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:08.426 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.426 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.426 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:08.426 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:08.426 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.426 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:08.427 21:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.427 21:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.427 21:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.427 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.427 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.685 00:22:08.685 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.685 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.685 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.685 21:08:59 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.685 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.685 21:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.685 21:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.685 21:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.685 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.685 { 00:22:08.685 "cntlid": 127, 00:22:08.685 "qid": 0, 00:22:08.685 "state": "enabled", 00:22:08.685 "listen_address": { 00:22:08.685 "trtype": "RDMA", 00:22:08.685 "adrfam": "IPv4", 00:22:08.685 "traddr": "192.168.100.8", 00:22:08.685 "trsvcid": "4420" 00:22:08.685 }, 00:22:08.685 "peer_address": { 00:22:08.685 "trtype": "RDMA", 00:22:08.685 "adrfam": "IPv4", 00:22:08.685 "traddr": "192.168.100.8", 00:22:08.685 "trsvcid": "40987" 00:22:08.685 }, 00:22:08.685 "auth": { 00:22:08.685 "state": "completed", 00:22:08.685 "digest": "sha512", 00:22:08.685 "dhgroup": "ffdhe4096" 00:22:08.685 } 00:22:08.685 } 00:22:08.685 ]' 00:22:08.685 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.945 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.945 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.945 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:08.945 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.945 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.945 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.945 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.203 21:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:22:09.771 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.771 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:09.771 21:09:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.771 21:09:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.771 21:09:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.771 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.771 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.771 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.771 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.032 21:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.290 00:22:10.290 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.290 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.290 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.549 { 00:22:10.549 "cntlid": 129, 00:22:10.549 "qid": 0, 00:22:10.549 "state": "enabled", 00:22:10.549 "listen_address": { 00:22:10.549 "trtype": "RDMA", 00:22:10.549 "adrfam": "IPv4", 00:22:10.549 "traddr": "192.168.100.8", 00:22:10.549 "trsvcid": "4420" 00:22:10.549 }, 00:22:10.549 "peer_address": { 00:22:10.549 "trtype": "RDMA", 00:22:10.549 "adrfam": "IPv4", 00:22:10.549 
"traddr": "192.168.100.8", 00:22:10.549 "trsvcid": "52838" 00:22:10.549 }, 00:22:10.549 "auth": { 00:22:10.549 "state": "completed", 00:22:10.549 "digest": "sha512", 00:22:10.549 "dhgroup": "ffdhe6144" 00:22:10.549 } 00:22:10.549 } 00:22:10.549 ]' 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.549 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.809 21:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:22:11.378 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.638 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.206 00:22:12.206 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.206 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.206 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.206 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.206 21:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.206 21:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.206 21:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.206 21:09:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.206 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.206 { 00:22:12.206 "cntlid": 131, 00:22:12.206 "qid": 0, 00:22:12.206 "state": "enabled", 00:22:12.206 "listen_address": { 00:22:12.206 "trtype": "RDMA", 00:22:12.206 "adrfam": "IPv4", 00:22:12.206 "traddr": "192.168.100.8", 00:22:12.206 "trsvcid": "4420" 00:22:12.206 }, 00:22:12.206 "peer_address": { 00:22:12.206 "trtype": "RDMA", 00:22:12.206 "adrfam": "IPv4", 00:22:12.206 "traddr": "192.168.100.8", 00:22:12.206 "trsvcid": "50532" 00:22:12.206 }, 00:22:12.206 "auth": { 00:22:12.206 "state": "completed", 00:22:12.206 "digest": "sha512", 00:22:12.206 "dhgroup": "ffdhe6144" 00:22:12.206 } 00:22:12.206 } 00:22:12.206 ]' 00:22:12.206 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.206 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.206 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.206 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:12.206 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.473 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.473 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
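For reference, each connect_authenticate iteration traced above reduces to the short host/target RPC sequence sketched below. This is a minimal reconstruction from the trace, not the verbatim target/auth.sh source; the socket path, NQNs, address, and key names are taken as-is from this run (the sha512/ffdhe6144/key1 iteration just completed), and the target-side rpc.py calls assume the target app listens on the default RPC socket.

    # host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # target side: register the host NQN with the key pair for this iteration
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach over RDMA, which forces the authentication handshake
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # verify the qpair negotiated sha512/ffdhe6144 and reached auth state "completed"
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'

    # tear down before the next key/dhgroup combination
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The checks in the trace assert exactly these three fields of the qpair JSON: .auth.digest, .auth.dhgroup, and .auth.state.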
00:22:12.473 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.473 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:22:13.042 21:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.300 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:13.300 21:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.300 21:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.300 21:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.300 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.300 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:13.300 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.560 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.819 00:22:13.819 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.819 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.819 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.079 { 00:22:14.079 "cntlid": 133, 00:22:14.079 "qid": 0, 00:22:14.079 "state": "enabled", 00:22:14.079 "listen_address": { 00:22:14.079 "trtype": "RDMA", 00:22:14.079 "adrfam": "IPv4", 00:22:14.079 "traddr": "192.168.100.8", 00:22:14.079 "trsvcid": "4420" 00:22:14.079 }, 00:22:14.079 "peer_address": { 00:22:14.079 "trtype": "RDMA", 00:22:14.079 "adrfam": "IPv4", 00:22:14.079 "traddr": "192.168.100.8", 00:22:14.079 "trsvcid": "43732" 00:22:14.079 }, 00:22:14.079 "auth": { 00:22:14.079 "state": "completed", 00:22:14.079 "digest": "sha512", 00:22:14.079 "dhgroup": "ffdhe6144" 00:22:14.079 } 00:22:14.079 } 00:22:14.079 ]' 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.079 21:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.339 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:22:14.907 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.907 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:14.907 21:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.907 21:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.907 21:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.907 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.907 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.907 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.167 21:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.426 00:22:15.426 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.426 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.426 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
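Each pass of the loop above follows the same connect/verify/teardown cycle. A minimal sketch of that cycle, condensed from the commands visible in this trace (addresses, NQNs, socket paths and key names are taken from the log; rpc.py stands for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path, and the real logic lives in target/auth.sh's connect_authenticate):

    # configure the host-side bdev_nvme DH-HMAC-CHAP policy, then attach
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
    # verify the qpair actually completed DH-HMAC-CHAP with the expected params
    rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -e '.[0].auth.state == "completed" and .[0].auth.dhgroup == "ffdhe6144"'
    # tear down before the next digest/dhgroup/key combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The trace performs the same check with separate jq filters and [[ ... ]] comparisons; the jq -e form above is just a compact equivalent.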
00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.686 { 00:22:15.686 "cntlid": 135, 00:22:15.686 "qid": 0, 00:22:15.686 "state": "enabled", 00:22:15.686 "listen_address": { 00:22:15.686 "trtype": "RDMA", 00:22:15.686 "adrfam": "IPv4", 00:22:15.686 "traddr": "192.168.100.8", 00:22:15.686 "trsvcid": "4420" 00:22:15.686 }, 00:22:15.686 "peer_address": { 00:22:15.686 "trtype": "RDMA", 00:22:15.686 "adrfam": "IPv4", 00:22:15.686 "traddr": "192.168.100.8", 00:22:15.686 "trsvcid": "46427" 00:22:15.686 }, 00:22:15.686 "auth": { 00:22:15.686 "state": "completed", 00:22:15.686 "digest": "sha512", 00:22:15.686 "dhgroup": "ffdhe6144" 00:22:15.686 } 00:22:15.686 } 00:22:15.686 ]' 00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:15.686 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.945 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.945 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.945 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.945 21:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:22:16.513 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.772 21:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.338 00:22:17.338 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.338 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.338 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.595 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.595 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.596 { 00:22:17.596 "cntlid": 137, 00:22:17.596 "qid": 0, 00:22:17.596 "state": "enabled", 00:22:17.596 "listen_address": { 00:22:17.596 "trtype": "RDMA", 00:22:17.596 "adrfam": "IPv4", 00:22:17.596 "traddr": "192.168.100.8", 00:22:17.596 "trsvcid": "4420" 00:22:17.596 }, 00:22:17.596 "peer_address": { 00:22:17.596 "trtype": "RDMA", 00:22:17.596 "adrfam": "IPv4", 00:22:17.596 "traddr": "192.168.100.8", 00:22:17.596 "trsvcid": "58249" 00:22:17.596 }, 00:22:17.596 "auth": { 00:22:17.596 "state": "completed", 00:22:17.596 "digest": "sha512", 00:22:17.596 "dhgroup": "ffdhe8192" 00:22:17.596 } 00:22:17.596 } 00:22:17.596 ]' 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.596 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.854 21:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:22:18.421 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.681 21:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.248 00:22:19.248 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.248 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.248 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.507 { 00:22:19.507 "cntlid": 139, 00:22:19.507 "qid": 0, 00:22:19.507 "state": "enabled", 00:22:19.507 "listen_address": { 00:22:19.507 "trtype": "RDMA", 00:22:19.507 "adrfam": "IPv4", 00:22:19.507 "traddr": "192.168.100.8", 00:22:19.507 "trsvcid": "4420" 00:22:19.507 }, 00:22:19.507 "peer_address": { 00:22:19.507 "trtype": "RDMA", 00:22:19.507 "adrfam": "IPv4", 00:22:19.507 "traddr": "192.168.100.8", 00:22:19.507 "trsvcid": "41847" 00:22:19.507 }, 00:22:19.507 "auth": { 00:22:19.507 "state": "completed", 00:22:19.507 "digest": "sha512", 00:22:19.507 "dhgroup": "ffdhe8192" 00:22:19.507 } 00:22:19.507 } 00:22:19.507 ]' 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.507 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.766 21:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:NmFiNDVkYjNjYjkyOWJlYTJjMjBkY2UyZGQ4NGYxNzRs3Ae2: --dhchap-ctrl-secret DHHC-1:02:M2YxMjA0OWVhMDJlMTBmZjU3YzhlYWNkNDNjNjlmOTIzMzZiNThkN2RiZDczZWQzCOt9cw==: 00:22:20.333 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.592 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.159 00:22:21.159 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.159 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.159 21:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.418 { 00:22:21.418 "cntlid": 141, 00:22:21.418 "qid": 0, 00:22:21.418 "state": "enabled", 00:22:21.418 "listen_address": { 00:22:21.418 "trtype": "RDMA", 00:22:21.418 "adrfam": "IPv4", 00:22:21.418 "traddr": "192.168.100.8", 00:22:21.418 "trsvcid": "4420" 00:22:21.418 }, 00:22:21.418 "peer_address": { 00:22:21.418 "trtype": "RDMA", 00:22:21.418 "adrfam": "IPv4", 00:22:21.418 "traddr": "192.168.100.8", 00:22:21.418 "trsvcid": "37979" 00:22:21.418 }, 00:22:21.418 "auth": { 00:22:21.418 "state": "completed", 00:22:21.418 "digest": "sha512", 00:22:21.418 "dhgroup": "ffdhe8192" 00:22:21.418 } 00:22:21.418 } 00:22:21.418 ]' 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.418 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.676 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:YmE4YmY2Nzk3ZGIyMTlhYTEyMzMwODc5NDg1ZTEyZDE0YjMwZTlmMjQ5N2VjNGEyXK8vvw==: --dhchap-ctrl-secret DHHC-1:01:YWUwZDg2NTA1M2NhMzgzNTc1MjFjM2YxODRkNTkxZTBIcjkq: 00:22:22.242 21:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.242 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:22.242 21:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.242 21:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.242 21:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.242 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.242 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:22.242 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:22.508 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.105 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.105 { 00:22:23.105 "cntlid": 143, 00:22:23.105 "qid": 0, 00:22:23.105 "state": "enabled", 00:22:23.105 "listen_address": { 00:22:23.105 "trtype": "RDMA", 00:22:23.105 "adrfam": "IPv4", 00:22:23.105 "traddr": "192.168.100.8", 00:22:23.105 "trsvcid": "4420" 00:22:23.105 }, 00:22:23.105 "peer_address": { 00:22:23.105 "trtype": "RDMA", 00:22:23.105 "adrfam": "IPv4", 00:22:23.105 "traddr": "192.168.100.8", 00:22:23.105 "trsvcid": "39568" 
00:22:23.105 }, 00:22:23.105 "auth": { 00:22:23.105 "state": "completed", 00:22:23.105 "digest": "sha512", 00:22:23.105 "dhgroup": "ffdhe8192" 00:22:23.105 } 00:22:23.105 } 00:22:23.105 ]' 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.105 21:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.363 21:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.363 21:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.363 21:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.363 21:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.363 21:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.621 21:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:22:24.186 21:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.186 21:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:24.186 21:09:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.186 21:09:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.186 21:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.186 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:24.186 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:24.186 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:24.186 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:24.186 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:24.186 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:24.443 
21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.443 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.009 00:22:25.009 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.009 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.009 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.009 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.009 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.009 21:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.009 21:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.009 21:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.009 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.009 { 00:22:25.009 "cntlid": 145, 00:22:25.009 "qid": 0, 00:22:25.009 "state": "enabled", 00:22:25.009 "listen_address": { 00:22:25.009 "trtype": "RDMA", 00:22:25.009 "adrfam": "IPv4", 00:22:25.009 "traddr": "192.168.100.8", 00:22:25.009 "trsvcid": "4420" 00:22:25.009 }, 00:22:25.009 "peer_address": { 00:22:25.009 "trtype": "RDMA", 00:22:25.009 "adrfam": "IPv4", 00:22:25.009 "traddr": "192.168.100.8", 00:22:25.009 "trsvcid": "57923" 00:22:25.009 }, 00:22:25.009 "auth": { 00:22:25.009 "state": "completed", 00:22:25.009 "digest": "sha512", 00:22:25.009 "dhgroup": "ffdhe8192" 00:22:25.010 } 00:22:25.010 } 00:22:25.010 ]' 00:22:25.010 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.010 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.010 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.268 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.268 21:09:15 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.268 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.268 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.268 21:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.527 21:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:M2NjODU0OTFmNDQwODMzODZhZDUzZmExMzc3MmIwOWI2MjVmZTZjNjAxZTlmOGRmbHl9sA==: --dhchap-ctrl-secret DHHC-1:03:MWYxMTFkNmVmZGVmNGQxODRjZmU2ZDg0OTIzNTJmZmI4NDhhYTUwZjE3NDk5YjdkMWQ0ODBkYjI3ZDQ2YzIxNmLFCeo=: 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:26.093 21:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:58.176 request: 00:22:58.176 { 00:22:58.176 "name": "nvme0", 00:22:58.176 "trtype": "rdma", 00:22:58.176 "traddr": "192.168.100.8", 00:22:58.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:58.176 "adrfam": "ipv4", 00:22:58.176 "trsvcid": "4420", 00:22:58.176 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:58.176 "dhchap_key": "key2", 00:22:58.176 "method": "bdev_nvme_attach_controller", 00:22:58.176 "req_id": 1 00:22:58.176 } 00:22:58.176 Got JSON-RPC error response 00:22:58.176 response: 00:22:58.176 { 00:22:58.176 "code": -5, 00:22:58.176 "message": "Input/output error" 00:22:58.176 } 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 
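The request/response block above is the expected outcome: only key1 was registered for the host, so attaching with key2 must fail, and the NOT wrapper from common/autotest_common.sh turns that JSON-RPC code -5 (Input/output error) into a pass. Stripped of the valid_exec_arg indirection, the assertion amounts to roughly the following (rpc.py again stands for the full scripts/rpc.py path shown in the log):

    # negative test: authentication with an unregistered key must be rejected
    if rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
        echo "attach unexpectedly succeeded with wrong DH-HMAC-CHAP key" >&2
        exit 1
    fi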
00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:58.176 request: 00:22:58.176 { 00:22:58.176 "name": "nvme0", 00:22:58.176 "trtype": "rdma", 00:22:58.176 "traddr": "192.168.100.8", 00:22:58.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:58.176 "adrfam": "ipv4", 00:22:58.176 "trsvcid": "4420", 00:22:58.176 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:58.176 "dhchap_key": "key1", 00:22:58.176 "dhchap_ctrlr_key": "ckey2", 00:22:58.176 "method": "bdev_nvme_attach_controller", 00:22:58.176 "req_id": 1 00:22:58.176 } 00:22:58.176 Got JSON-RPC error response 00:22:58.176 response: 00:22:58.176 { 00:22:58.176 "code": -5, 00:22:58.176 "message": "Input/output error" 00:22:58.176 } 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.176 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.177 21:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.257 request: 00:23:30.257 { 00:23:30.257 "name": "nvme0", 00:23:30.257 "trtype": "rdma", 00:23:30.257 "traddr": "192.168.100.8", 00:23:30.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:30.257 "adrfam": "ipv4", 00:23:30.257 "trsvcid": "4420", 00:23:30.257 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:30.257 "dhchap_key": "key1", 00:23:30.257 "dhchap_ctrlr_key": "ckey1", 00:23:30.257 "method": "bdev_nvme_attach_controller", 00:23:30.257 "req_id": 1 00:23:30.257 } 00:23:30.257 Got JSON-RPC error response 00:23:30.257 response: 00:23:30.257 { 00:23:30.257 "code": -5, 00:23:30.257 "message": "Input/output error" 00:23:30.257 } 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3566165 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3566165 ']' 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3566165 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3566165 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3566165' 00:23:30.257 killing process with pid 3566165 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3566165 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3566165 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3599338 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3599338 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3599338 ']' 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3599338 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3599338 ']' 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
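At this point the trace kills the first nvmf_tgt (pid 3566165) and restarts it with --wait-for-rpc -L nvmf_auth so DH-HMAC-CHAP debug logging is enabled for the remaining cases. A rough sketch of that restart, under the assumption that killprocess and waitforlisten behave as plain kill-and-poll helpers (the binary path and flags are copied from the log; the polling method is illustrative, not the helpers' actual implementation):

    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null        # ~ killprocess
    build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # ~ waitforlisten: block until the app answers on /var/tmp/spdk.sock
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done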
00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.257 21:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:30.257 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:23:30.257 { 00:23:30.257 "cntlid": 1, 00:23:30.257 "qid": 0, 00:23:30.257 "state": "enabled", 00:23:30.257 "listen_address": { 00:23:30.257 "trtype": "RDMA", 00:23:30.257 "adrfam": "IPv4", 00:23:30.257 "traddr": "192.168.100.8", 00:23:30.257 "trsvcid": "4420" 00:23:30.257 }, 00:23:30.257 "peer_address": { 00:23:30.257 "trtype": "RDMA", 00:23:30.257 "adrfam": "IPv4", 00:23:30.257 "traddr": "192.168.100.8", 00:23:30.257 "trsvcid": "36911" 00:23:30.257 }, 00:23:30.257 "auth": { 00:23:30.257 "state": "completed", 00:23:30.257 "digest": "sha512", 00:23:30.257 "dhgroup": "ffdhe8192" 00:23:30.257 } 00:23:30.257 } 00:23:30.257 ]' 00:23:30.257 21:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:30.257 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:30.257 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:30.257 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:30.257 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:30.257 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.257 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.257 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.257 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:N2Y4NjA5NDU1NzY1Y2M4ZTg4YzljMDFiMDM2MGM3YmY4MzkxZDg1NmZiOWU1M2MxZGM4N2U5MTBjNGQ3NjBhYW5231c=: 00:23:30.257 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.258 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:30.258 21:10:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.258 21:10:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.258 21:10:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.258 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:30.258 21:10:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.258 21:10:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.258 21:10:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.258 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:30.258 21:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:30.517 21:10:21 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:30.517 21:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:30.517 21:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:30.517 21:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:30.517 21:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.517 21:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:30.517 21:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.517 21:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:30.517 21:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:02.686 request: 00:24:02.686 { 00:24:02.686 "name": "nvme0", 00:24:02.686 "trtype": "rdma", 00:24:02.686 "traddr": "192.168.100.8", 00:24:02.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:02.686 "adrfam": "ipv4", 00:24:02.686 "trsvcid": "4420", 00:24:02.686 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:02.686 "dhchap_key": "key3", 00:24:02.686 "method": "bdev_nvme_attach_controller", 00:24:02.686 "req_id": 1 00:24:02.686 } 00:24:02.686 Got JSON-RPC error response 00:24:02.686 response: 00:24:02.686 { 00:24:02.686 "code": -5, 00:24:02.686 "message": "Input/output error" 00:24:02.686 } 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:02.686 21:10:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:34.771 request: 00:24:34.771 { 00:24:34.771 "name": "nvme0", 00:24:34.771 "trtype": "rdma", 00:24:34.771 "traddr": "192.168.100.8", 00:24:34.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:34.771 "adrfam": "ipv4", 00:24:34.771 "trsvcid": "4420", 00:24:34.771 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:34.771 "dhchap_key": "key3", 00:24:34.771 "method": "bdev_nvme_attach_controller", 00:24:34.771 "req_id": 1 00:24:34.771 } 00:24:34.771 Got JSON-RPC error response 00:24:34.771 response: 00:24:34.771 { 00:24:34.771 "code": -5, 00:24:34.771 "message": "Input/output error" 00:24:34.771 } 00:24:34.771 21:11:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:34.771 21:11:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:34.771 21:11:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:34.771 21:11:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:34.771 21:11:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:24:34.771 21:11:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:24:34.771 21:11:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:24:34.771 21:11:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:34.771 21:11:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:34.771 21:11:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:34.771 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:34.771 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.771 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.771 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.771 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:34.771 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.771 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.771 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.771 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:34.771 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:34.772 request: 00:24:34.772 { 00:24:34.772 "name": "nvme0", 00:24:34.772 "trtype": "rdma", 00:24:34.772 "traddr": "192.168.100.8", 00:24:34.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:34.772 "adrfam": "ipv4", 00:24:34.772 "trsvcid": "4420", 00:24:34.772 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:34.772 "dhchap_key": "key0", 00:24:34.772 "dhchap_ctrlr_key": "key1", 00:24:34.772 "method": "bdev_nvme_attach_controller", 00:24:34.772 "req_id": 1 00:24:34.772 } 00:24:34.772 Got JSON-RPC error response 00:24:34.772 response: 00:24:34.772 { 00:24:34.772 "code": -5, 
00:24:34.772 "message": "Input/output error" 00:24:34.772 } 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:34.772 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3566184 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3566184 ']' 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3566184 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3566184 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:34.772 21:11:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3566184' 00:24:34.772 killing process with pid 3566184 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3566184 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3566184 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@117 -- # sync 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:34.772 rmmod nvme_rdma 00:24:34.772 rmmod nvme_fabrics 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3599338 ']' 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3599338 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3599338 ']' 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3599338 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3599338 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3599338' 00:24:34.772 killing process with pid 3599338 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3599338 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3599338 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.U9B /tmp/spdk.key-sha256.ehf /tmp/spdk.key-sha384.CcR /tmp/spdk.key-sha512.Phh /tmp/spdk.key-sha512.Krv /tmp/spdk.key-sha384.M7R /tmp/spdk.key-sha256.yGT '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:24:34.772 00:24:34.772 real 4m20.431s 00:24:34.772 user 9m18.901s 00:24:34.772 sys 0m22.848s 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:34.772 21:11:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.772 ************************************ 00:24:34.772 END TEST nvmf_auth_target 00:24:34.772 ************************************ 00:24:34.772 21:11:23 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:24:34.772 21:11:23 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:34.772 21:11:23 nvmf_rdma -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:24:34.772 21:11:23 nvmf_rdma -- common/autotest_common.sh@1097 -- 
# '[' 3 -le 1 ']' 00:24:34.772 21:11:23 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:34.772 21:11:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:34.772 ************************************ 00:24:34.772 START TEST nvmf_fuzz 00:24:34.772 ************************************ 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:24:34.772 * Looking for test storage... 00:24:34.772 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.772 21:11:23 nvmf_rdma.nvmf_fuzz -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != 
virt ]] 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:34.773 21:11:23 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:40.049 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:40.050 21:11:30 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:40.050 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:40.050 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:40.050 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:40.050 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:40.050 21:11:30 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@420 -- # rdma_device_init 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # uname 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:40.050 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:40.050 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:40.050 altname enp217s0f0np0 00:24:40.050 altname ens818f0np0 00:24:40.050 inet 192.168.100.8/24 scope global mlx_0_0 00:24:40.050 valid_lft forever preferred_lft forever 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:40.050 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:40.050 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:40.050 altname enp217s0f1np1 00:24:40.050 altname ens818f1np1 00:24:40.050 inet 192.168.100.9/24 scope global mlx_0_1 00:24:40.050 valid_lft forever preferred_lft forever 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:40.050 192.168.100.9' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:40.050 192.168.100.9' 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # head -n 1 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:40.050 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:40.050 192.168.100.9' 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # tail -n +2 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # head -n 1 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3613180 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3613180 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3613180 ']' 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.051 21:11:30 
nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:40.051 21:11:30 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.621 Malloc0 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:24:40.621 21:11:31 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:25:12.711 Fuzzing completed. 
Shutting down the fuzz application 00:25:12.711 00:25:12.711 Dumping successful admin opcodes: 00:25:12.711 8, 9, 10, 24, 00:25:12.711 Dumping successful io opcodes: 00:25:12.711 0, 9, 00:25:12.711 NS: 0x200003af1f00 I/O qp, Total commands completed: 986558, total successful commands: 5781, random_seed: 787875264 00:25:12.711 NS: 0x200003af1f00 admin qp, Total commands completed: 127088, total successful commands: 1038, random_seed: 551034816 00:25:12.711 21:12:01 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:12.711 Fuzzing completed. Shutting down the fuzz application 00:25:12.711 00:25:12.711 Dumping successful admin opcodes: 00:25:12.711 24, 00:25:12.711 Dumping successful io opcodes: 00:25:12.711 00:25:12.711 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4220137936 00:25:12.711 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 4220220108 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:12.711 rmmod nvme_rdma 00:25:12.711 rmmod nvme_fabrics 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:12.711 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3613180 ']' 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 3613180 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3613180 ']' 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 3613180 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3613180 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:12.712 21:12:03 
nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3613180' 00:25:12.712 killing process with pid 3613180 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 3613180 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 3613180 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:12.712 00:25:12.712 real 0m39.729s 00:25:12.712 user 0m50.034s 00:25:12.712 sys 0m20.929s 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:12.712 21:12:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.712 ************************************ 00:25:12.712 END TEST nvmf_fuzz 00:25:12.712 ************************************ 00:25:12.712 21:12:03 nvmf_rdma -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:25:12.712 21:12:03 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:12.712 21:12:03 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:12.712 21:12:03 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:12.712 ************************************ 00:25:12.712 START TEST nvmf_multiconnection 00:25:12.712 ************************************ 00:25:12.712 21:12:03 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:25:12.712 * Looking for test storage... 
00:25:12.712 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:12.712 21:12:03 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.712 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:12.712 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.712 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.712 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.712 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.712 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:12.972 21:12:03 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:19.590 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:19.590 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:19.590 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:19.590 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@420 -- # rdma_device_init 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # uname 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:19.590 21:12:09 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 
-- # for net_dev in "${net_devs[@]}" 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:19.590 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:19.590 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:19.590 altname enp217s0f0np0 00:25:19.590 altname ens818f0np0 00:25:19.590 inet 192.168.100.8/24 scope global mlx_0_0 00:25:19.590 valid_lft forever preferred_lft forever 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:19.590 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:19.590 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:19.590 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:19.590 altname enp217s0f1np1 00:25:19.590 altname 
ens818f1np1 00:25:19.590 inet 192.168.100.9/24 scope global mlx_0_1 00:25:19.590 valid_lft forever preferred_lft forever 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection 
-- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:19.591 192.168.100.9' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:19.591 192.168.100.9' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # head -n 1 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:19.591 192.168.100.9' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # head -n 1 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # tail -n +2 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3622446 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3622446 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 3622446 ']' 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:19.591 21:12:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.591 [2024-07-13 21:12:10.234869] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
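The head/tail pipeline traced above is how common.sh splits RDMA_IP_LIST into the first and second target IPs. A minimal standalone sketch of that selection logic follows (variable names mirror the trace; this is a reconstruction, not the verbatim common.sh source):

# Split the newline-separated RDMA interface IPs into first/second target,
# as the traced head/tail pipeline does.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
[ -z "$NVMF_FIRST_TARGET_IP" ] && echo 'no RDMA IPs found' >&2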
00:25:19.591 [2024-07-13 21:12:10.234919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.591 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.591 [2024-07-13 21:12:10.307820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:19.591 [2024-07-13 21:12:10.349519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.591 [2024-07-13 21:12:10.349563] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.591 [2024-07-13 21:12:10.349572] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.591 [2024-07-13 21:12:10.349582] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.591 [2024-07-13 21:12:10.349589] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.591 [2024-07-13 21:12:10.349640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.591 [2024-07-13 21:12:10.349736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.591 [2024-07-13 21:12:10.349821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:19.591 [2024-07-13 21:12:10.349823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.160 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:20.161 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:25:20.161 21:12:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:20.161 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 [2024-07-13 21:12:11.123561] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18bec80/0x18c3170) succeed. 00:25:20.420 [2024-07-13 21:12:11.133915] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18c02c0/0x1904800) succeed. 
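From here the trace repeats one provisioning pattern eleven times (Malloc1/cnode1 through Malloc11/cnode11). Condensed into a standalone sketch, the same RPC sequence looks like this; rpc.py stands in for the harness's rpc_cmd wrapper, its path is an assumption, and the sizes, NQNs, and listener address are copied from the trace:

# One RDMA transport, then a malloc bdev + subsystem + namespace + listener per cnode.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed location
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b Malloc$i                     # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done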
00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 Malloc1 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.420 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.420 [2024-07-13 21:12:11.307700] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 Malloc2 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 
Malloc2 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 Malloc3 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 Malloc4 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s 
SPDK4 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.680 Malloc5 00:25:20.680 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:20.681 21:12:11 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.681 Malloc6 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.681 Malloc7 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.681 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 Malloc8 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 Malloc9 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 Malloc10 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 Malloc11 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.941 21:12:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:21.877 21:12:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:21.877 21:12:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:21.877 21:12:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:21.877 21:12:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:21.877 21:12:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:24.411 21:12:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:24.411 21:12:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:24.411 21:12:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:25:24.411 21:12:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:24.411 21:12:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.411 21:12:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:24.411 21:12:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.411 21:12:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:25:24.978 21:12:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:24.978 21:12:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:24.978 21:12:15 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.978 21:12:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:24.978 21:12:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:26.883 21:12:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:26.883 21:12:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:26.883 21:12:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:25:26.883 21:12:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:26.883 21:12:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.883 21:12:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:26.883 21:12:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.883 21:12:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:25:27.820 21:12:18 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:27.820 21:12:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:27.820 21:12:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.079 21:12:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:28.079 21:12:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:29.986 21:12:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:29.986 21:12:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:29.986 21:12:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:25:29.986 21:12:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:29.986 21:12:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:29.986 21:12:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:29.986 21:12:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.986 21:12:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:25:30.922 21:12:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:30.922 21:12:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:30.922 21:12:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.922 21:12:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:30.922 21:12:21 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@1201 -- # sleep 2 00:25:32.826 21:12:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:33.085 21:12:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:33.085 21:12:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:25:33.085 21:12:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:33.085 21:12:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.085 21:12:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:33.085 21:12:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.085 21:12:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:25:34.022 21:12:24 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:34.022 21:12:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:34.022 21:12:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.022 21:12:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:34.022 21:12:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:35.927 21:12:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:35.927 21:12:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:35.927 21:12:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:25:35.927 21:12:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:35.927 21:12:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.927 21:12:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:35.927 21:12:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.927 21:12:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:25:36.863 21:12:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:36.863 21:12:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:36.863 21:12:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.863 21:12:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:36.863 21:12:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:39.400 21:12:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:39.400 21:12:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 
00:25:39.400 21:12:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:25:39.400 21:12:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:39.400 21:12:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.400 21:12:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:39.400 21:12:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.400 21:12:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:25:39.968 21:12:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:39.968 21:12:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:39.968 21:12:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.968 21:12:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:39.968 21:12:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:41.942 21:12:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:41.942 21:12:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:41.942 21:12:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:25:41.942 21:12:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:41.942 21:12:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.942 21:12:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:41.942 21:12:32 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.942 21:12:32 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:25:42.882 21:12:33 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:42.882 21:12:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:42.882 21:12:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.882 21:12:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:42.882 21:12:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:45.415 21:12:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:45.415 21:12:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:45.415 21:12:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:45.415 21:12:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:45.415 21:12:35 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:45.415 21:12:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:45.415 21:12:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.415 21:12:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:25:45.983 21:12:36 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:45.983 21:12:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:45.983 21:12:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.983 21:12:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:45.983 21:12:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:47.887 21:12:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:47.887 21:12:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:47.887 21:12:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:48.144 21:12:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:48.144 21:12:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:48.144 21:12:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:48.144 21:12:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.144 21:12:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:25:49.076 21:12:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:49.076 21:12:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:49.076 21:12:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:49.076 21:12:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:49.076 21:12:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:50.977 21:12:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:50.977 21:12:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:50.977 21:12:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:50.977 21:12:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:50.977 21:12:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.977 21:12:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:50.977 21:12:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.977 21:12:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:25:51.912 21:12:42 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:51.912 21:12:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:51.912 21:12:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:51.912 21:12:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:51.912 21:12:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:54.451 21:12:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:54.451 21:12:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:54.451 21:12:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:54.451 21:12:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:54.451 21:12:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:54.451 21:12:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:54.451 21:12:44 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:54.451 [global] 00:25:54.451 thread=1 00:25:54.451 invalidate=1 00:25:54.451 rw=read 00:25:54.451 time_based=1 00:25:54.451 runtime=10 00:25:54.451 ioengine=libaio 00:25:54.451 direct=1 00:25:54.451 bs=262144 00:25:54.451 iodepth=64 00:25:54.451 norandommap=1 00:25:54.451 numjobs=1 00:25:54.451 00:25:54.451 [job0] 00:25:54.451 filename=/dev/nvme0n1 00:25:54.451 [job1] 00:25:54.451 filename=/dev/nvme10n1 00:25:54.451 [job2] 00:25:54.451 filename=/dev/nvme1n1 00:25:54.451 [job3] 00:25:54.451 filename=/dev/nvme2n1 00:25:54.451 [job4] 00:25:54.451 filename=/dev/nvme3n1 00:25:54.451 [job5] 00:25:54.451 filename=/dev/nvme4n1 00:25:54.451 [job6] 00:25:54.451 filename=/dev/nvme5n1 00:25:54.451 [job7] 00:25:54.451 filename=/dev/nvme6n1 00:25:54.451 [job8] 00:25:54.451 filename=/dev/nvme7n1 00:25:54.451 [job9] 00:25:54.451 filename=/dev/nvme8n1 00:25:54.451 [job10] 00:25:54.451 filename=/dev/nvme9n1 00:25:54.451 Could not set queue depth (nvme0n1) 00:25:54.452 Could not set queue depth (nvme10n1) 00:25:54.452 Could not set queue depth (nvme1n1) 00:25:54.452 Could not set queue depth (nvme2n1) 00:25:54.452 Could not set queue depth (nvme3n1) 00:25:54.452 Could not set queue depth (nvme4n1) 00:25:54.452 Could not set queue depth (nvme5n1) 00:25:54.452 Could not set queue depth (nvme6n1) 00:25:54.452 Could not set queue depth (nvme7n1) 00:25:54.452 Could not set queue depth (nvme8n1) 00:25:54.452 Could not set queue depth (nvme9n1) 00:25:54.452 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.452 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.452 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:25:54.452 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.452 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.452 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.452 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.452 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.452 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.452 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.452 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:54.452 fio-3.35 00:25:54.452 Starting 11 threads 00:26:06.661 00:26:06.661 job0: (groupid=0, jobs=1): err= 0: pid=3628674: Sat Jul 13 21:12:55 2024 00:26:06.661 read: IOPS=1937, BW=484MiB/s (508MB/s)(4859MiB/10028msec) 00:26:06.661 slat (usec): min=10, max=10631, avg=511.54, stdev=1156.44 00:26:06.661 clat (usec): min=10531, max=59277, avg=32475.75, stdev=3755.52 00:26:06.661 lat (usec): min=10787, max=63332, avg=32987.29, stdev=3871.93 00:26:06.661 clat percentiles (usec): 00:26:06.661 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30540], 00:26:06.661 | 30.00th=[30802], 40.00th=[31327], 50.00th=[31851], 60.00th=[32113], 00:26:06.661 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34341], 95.00th=[36963], 00:26:06.661 | 99.00th=[50594], 99.50th=[52691], 99.90th=[56361], 99.95th=[56886], 00:26:06.661 | 99.99th=[59507] 00:26:06.661 bw ( KiB/s): min=339968, max=518144, per=12.46%, avg=495897.60, stdev=38889.82, samples=20 00:26:06.661 iops : min= 1328, max= 2024, avg=1937.10, stdev=151.91, samples=20 00:26:06.661 lat (msec) : 20=0.20%, 50=98.61%, 100=1.19% 00:26:06.661 cpu : usr=0.53%, sys=5.42%, ctx=3813, majf=0, minf=3221 00:26:06.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:06.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.661 issued rwts: total=19434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.661 job1: (groupid=0, jobs=1): err= 0: pid=3628675: Sat Jul 13 21:12:55 2024 00:26:06.661 read: IOPS=797, BW=199MiB/s (209MB/s)(2004MiB/10055msec) 00:26:06.661 slat (usec): min=14, max=31220, avg=1244.10, stdev=3345.04 00:26:06.661 clat (msec): min=12, max=138, avg=78.95, stdev= 8.12 00:26:06.661 lat (msec): min=12, max=138, avg=80.20, stdev= 8.75 00:26:06.661 clat percentiles (msec): 00:26:06.661 | 1.00th=[ 62], 5.00th=[ 64], 10.00th=[ 66], 20.00th=[ 79], 00:26:06.661 | 30.00th=[ 80], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:26:06.661 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 86], 95.00th=[ 88], 00:26:06.661 | 99.00th=[ 99], 99.50th=[ 105], 99.90th=[ 128], 99.95th=[ 128], 00:26:06.661 | 99.99th=[ 138] 00:26:06.661 bw ( KiB/s): min=188928, max=254464, per=5.11%, avg=203591.80, stdev=15014.10, samples=20 00:26:06.661 iops : min= 738, max= 994, avg=795.25, stdev=58.64, samples=20 00:26:06.661 lat (msec) : 20=0.29%, 50=0.34%, 100=98.53%, 250=0.85% 00:26:06.661 cpu : 
usr=0.32%, sys=3.54%, ctx=1580, majf=0, minf=4097 00:26:06.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:06.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.661 issued rwts: total=8015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.661 job2: (groupid=0, jobs=1): err= 0: pid=3628677: Sat Jul 13 21:12:55 2024 00:26:06.661 read: IOPS=1937, BW=484MiB/s (508MB/s)(4857MiB/10028msec) 00:26:06.661 slat (usec): min=12, max=9760, avg=511.26, stdev=1155.21 00:26:06.661 clat (usec): min=11005, max=63460, avg=32489.14, stdev=3751.27 00:26:06.662 lat (usec): min=11402, max=63504, avg=33000.40, stdev=3875.79 00:26:06.662 clat percentiles (usec): 00:26:06.662 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30540], 00:26:06.662 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:26:06.662 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34341], 95.00th=[36963], 00:26:06.662 | 99.00th=[50070], 99.50th=[52691], 99.90th=[56886], 99.95th=[57934], 00:26:06.662 | 99.99th=[61080] 00:26:06.662 bw ( KiB/s): min=344064, max=514560, per=12.45%, avg=495744.00, stdev=38196.23, samples=20 00:26:06.662 iops : min= 1344, max= 2010, avg=1936.50, stdev=149.20, samples=20 00:26:06.662 lat (msec) : 20=0.22%, 50=98.74%, 100=1.04% 00:26:06.662 cpu : usr=0.70%, sys=7.43%, ctx=3679, majf=0, minf=4097 00:26:06.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:06.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.662 issued rwts: total=19428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.662 job3: (groupid=0, jobs=1): err= 0: pid=3628681: Sat Jul 13 21:12:55 2024 00:26:06.662 read: IOPS=1860, BW=465MiB/s (488MB/s)(4666MiB/10030msec) 00:26:06.662 slat (usec): min=13, max=11941, avg=532.42, stdev=1212.15 00:26:06.662 clat (usec): min=10415, max=58685, avg=33824.66, stdev=4835.24 00:26:06.662 lat (usec): min=10652, max=60582, avg=34357.08, stdev=4972.42 00:26:06.662 clat percentiles (usec): 00:26:06.662 | 1.00th=[29492], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:26:06.662 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32637], 60.00th=[32900], 00:26:06.662 | 70.00th=[33424], 80.00th=[34341], 90.00th=[37487], 95.00th=[47973], 00:26:06.662 | 99.00th=[50594], 99.50th=[52167], 99.90th=[55837], 99.95th=[56886], 00:26:06.662 | 99.99th=[58459] 00:26:06.662 bw ( KiB/s): min=333824, max=506368, per=11.96%, avg=476227.60, stdev=46281.24, samples=20 00:26:06.662 iops : min= 1304, max= 1978, avg=1860.25, stdev=180.80, samples=20 00:26:06.662 lat (msec) : 20=0.24%, 50=98.36%, 100=1.40% 00:26:06.662 cpu : usr=0.56%, sys=7.45%, ctx=3540, majf=0, minf=4097 00:26:06.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:06.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.662 issued rwts: total=18664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.662 job4: (groupid=0, jobs=1): err= 0: pid=3628682: Sat Jul 13 21:12:55 2024 00:26:06.662 read: IOPS=798, 
BW=200MiB/s (209MB/s)(2006MiB/10054msec) 00:26:06.662 slat (usec): min=16, max=23061, avg=1241.71, stdev=3096.45 00:26:06.662 clat (msec): min=12, max=129, avg=78.85, stdev= 7.77 00:26:06.662 lat (msec): min=12, max=129, avg=80.10, stdev= 8.33 00:26:06.662 clat percentiles (msec): 00:26:06.662 | 1.00th=[ 62], 5.00th=[ 64], 10.00th=[ 66], 20.00th=[ 79], 00:26:06.662 | 30.00th=[ 80], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:26:06.662 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 88], 00:26:06.662 | 99.00th=[ 95], 99.50th=[ 99], 99.90th=[ 108], 99.95th=[ 108], 00:26:06.662 | 99.99th=[ 130] 00:26:06.662 bw ( KiB/s): min=194048, max=244736, per=5.12%, avg=203827.20, stdev=13703.11, samples=20 00:26:06.662 iops : min= 758, max= 956, avg=796.20, stdev=53.53, samples=20 00:26:06.662 lat (msec) : 20=0.25%, 50=0.32%, 100=99.12%, 250=0.31% 00:26:06.662 cpu : usr=0.41%, sys=4.07%, ctx=1617, majf=0, minf=4097 00:26:06.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:06.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.662 issued rwts: total=8025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.662 job5: (groupid=0, jobs=1): err= 0: pid=3628683: Sat Jul 13 21:12:55 2024 00:26:06.662 read: IOPS=1903, BW=476MiB/s (499MB/s)(4773MiB/10028msec) 00:26:06.662 slat (usec): min=12, max=10603, avg=516.42, stdev=1204.38 00:26:06.662 clat (usec): min=12697, max=90013, avg=33070.40, stdev=6884.49 00:26:06.662 lat (usec): min=12931, max=94686, avg=33586.82, stdev=7039.70 00:26:06.662 clat percentiles (usec): 00:26:06.662 | 1.00th=[29754], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:26:06.662 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:26:06.662 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[35390], 00:26:06.662 | 99.00th=[78119], 99.50th=[81265], 99.90th=[85459], 99.95th=[86508], 00:26:06.662 | 99.99th=[88605] 00:26:06.662 bw ( KiB/s): min=205210, max=507392, per=12.24%, avg=487111.70, stdev=66600.01, samples=20 00:26:06.662 iops : min= 801, max= 1982, avg=1902.75, stdev=260.29, samples=20 00:26:06.662 lat (msec) : 20=0.18%, 50=97.44%, 100=2.38% 00:26:06.662 cpu : usr=0.59%, sys=6.32%, ctx=4092, majf=0, minf=4097 00:26:06.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:06.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.662 issued rwts: total=19090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.662 job6: (groupid=0, jobs=1): err= 0: pid=3628684: Sat Jul 13 21:12:55 2024 00:26:06.662 read: IOPS=1726, BW=432MiB/s (452MB/s)(4339MiB/10054msec) 00:26:06.662 slat (usec): min=10, max=24152, avg=568.81, stdev=1447.10 00:26:06.662 clat (msec): min=11, max=133, avg=36.47, stdev=12.02 00:26:06.662 lat (msec): min=11, max=133, avg=37.04, stdev=12.24 00:26:06.662 clat percentiles (msec): 00:26:06.662 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:26:06.662 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:26:06.662 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 48], 95.00th=[ 72], 00:26:06.662 | 99.00th=[ 86], 99.50th=[ 87], 99.90th=[ 105], 99.95th=[ 124], 00:26:06.662 | 99.99th=[ 134] 
00:26:06.662 bw ( KiB/s): min=193536, max=503296, per=11.12%, avg=442691.85, stdev=102851.43, samples=20 00:26:06.662 iops : min= 756, max= 1966, avg=1729.25, stdev=401.77, samples=20 00:26:06.662 lat (msec) : 20=0.33%, 50=92.91%, 100=6.62%, 250=0.14% 00:26:06.662 cpu : usr=0.47%, sys=5.36%, ctx=3458, majf=0, minf=4097 00:26:06.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:06.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.662 issued rwts: total=17354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.662 job7: (groupid=0, jobs=1): err= 0: pid=3628685: Sat Jul 13 21:12:55 2024 00:26:06.662 read: IOPS=829, BW=207MiB/s (217MB/s)(2084MiB/10055msec) 00:26:06.662 slat (usec): min=11, max=26896, avg=1190.39, stdev=3209.34 00:26:06.662 clat (msec): min=11, max=131, avg=75.91, stdev=12.42 00:26:06.662 lat (msec): min=11, max=131, avg=77.10, stdev=12.91 00:26:06.662 clat percentiles (msec): 00:26:06.662 | 1.00th=[ 43], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 68], 00:26:06.662 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:26:06.662 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 87], 00:26:06.662 | 99.00th=[ 101], 99.50th=[ 105], 99.90th=[ 122], 99.95th=[ 132], 00:26:06.662 | 99.99th=[ 132] 00:26:06.662 bw ( KiB/s): min=189952, max=341675, per=5.32%, avg=211848.55, stdev=34307.62, samples=20 00:26:06.662 iops : min= 742, max= 1334, avg=827.50, stdev=133.88, samples=20 00:26:06.662 lat (msec) : 20=0.49%, 50=7.35%, 100=91.16%, 250=1.00% 00:26:06.662 cpu : usr=0.27%, sys=3.14%, ctx=1755, majf=0, minf=4097 00:26:06.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:06.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.662 issued rwts: total=8337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.662 job8: (groupid=0, jobs=1): err= 0: pid=3628686: Sat Jul 13 21:12:55 2024 00:26:06.662 read: IOPS=2021, BW=505MiB/s (530MB/s)(5069MiB/10027msec) 00:26:06.662 slat (usec): min=11, max=7826, avg=488.88, stdev=1092.59 00:26:06.662 clat (usec): min=684, max=62769, avg=31133.82, stdev=4888.97 00:26:06.662 lat (usec): min=707, max=62784, avg=31622.70, stdev=5025.64 00:26:06.662 clat percentiles (usec): 00:26:06.662 | 1.00th=[13566], 5.00th=[16712], 10.00th=[30278], 20.00th=[31065], 00:26:06.662 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:26:06.662 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33817], 95.00th=[34866], 00:26:06.662 | 99.00th=[44303], 99.50th=[45876], 99.90th=[54264], 99.95th=[56361], 00:26:06.662 | 99.99th=[57934] 00:26:06.662 bw ( KiB/s): min=487936, max=721920, per=13.00%, avg=517401.60, stdev=50088.34, samples=20 00:26:06.662 iops : min= 1906, max= 2820, avg=2021.10, stdev=195.66, samples=20 00:26:06.662 lat (usec) : 750=0.01%, 1000=0.01% 00:26:06.662 lat (msec) : 2=0.05%, 4=0.10%, 10=0.46%, 20=5.63%, 50=93.61% 00:26:06.662 lat (msec) : 100=0.11% 00:26:06.662 cpu : usr=0.32%, sys=5.72%, ctx=4343, majf=0, minf=4097 00:26:06.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:06.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.662 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.662 issued rwts: total=20274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.662 job9: (groupid=0, jobs=1): err= 0: pid=3628687: Sat Jul 13 21:12:55 2024 00:26:06.662 read: IOPS=796, BW=199MiB/s (209MB/s)(2003MiB/10052msec) 00:26:06.662 slat (usec): min=14, max=33888, avg=1244.22, stdev=3311.90 00:26:06.662 clat (msec): min=12, max=127, avg=79.00, stdev= 8.00 00:26:06.662 lat (msec): min=13, max=127, avg=80.24, stdev= 8.61 00:26:06.662 clat percentiles (msec): 00:26:06.662 | 1.00th=[ 63], 5.00th=[ 64], 10.00th=[ 66], 20.00th=[ 79], 00:26:06.662 | 30.00th=[ 80], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 81], 00:26:06.662 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 87], 00:26:06.662 | 99.00th=[ 97], 99.50th=[ 110], 99.90th=[ 126], 99.95th=[ 126], 00:26:06.662 | 99.99th=[ 128] 00:26:06.662 bw ( KiB/s): min=189952, max=249344, per=5.11%, avg=203463.50, stdev=14724.84, samples=20 00:26:06.662 iops : min= 742, max= 974, avg=794.75, stdev=57.52, samples=20 00:26:06.662 lat (msec) : 20=0.22%, 50=0.36%, 100=98.65%, 250=0.76% 00:26:06.662 cpu : usr=0.47%, sys=3.83%, ctx=1605, majf=0, minf=4097 00:26:06.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:06.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.662 issued rwts: total=8010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.662 job10: (groupid=0, jobs=1): err= 0: pid=3628688: Sat Jul 13 21:12:55 2024 00:26:06.662 read: IOPS=969, BW=242MiB/s (254MB/s)(2431MiB/10029msec) 00:26:06.662 slat (usec): min=11, max=39010, avg=1012.21, stdev=3013.79 00:26:06.663 clat (msec): min=12, max=119, avg=64.94, stdev=21.47 00:26:06.663 lat (msec): min=12, max=120, avg=65.95, stdev=21.97 00:26:06.663 clat percentiles (msec): 00:26:06.663 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 35], 00:26:06.663 | 30.00th=[ 48], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 81], 00:26:06.663 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 84], 95.00th=[ 86], 00:26:06.663 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 114], 99.95th=[ 117], 00:26:06.663 | 99.99th=[ 120] 00:26:06.663 bw ( KiB/s): min=192000, max=494592, per=6.21%, avg=247291.10, stdev=97206.07, samples=20 00:26:06.663 iops : min= 750, max= 1932, avg=965.95, stdev=379.72, samples=20 00:26:06.663 lat (msec) : 20=0.23%, 50=35.30%, 100=64.16%, 250=0.31% 00:26:06.663 cpu : usr=0.44%, sys=4.38%, ctx=1993, majf=0, minf=4097 00:26:06.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:06.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:06.663 issued rwts: total=9722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.663 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:06.663 00:26:06.663 Run status group 0 (all jobs): 00:26:06.663 READ: bw=3887MiB/s (4076MB/s), 199MiB/s-505MiB/s (209MB/s-530MB/s), io=38.2GiB (41.0GB), run=10027-10055msec 00:26:06.663 00:26:06.663 Disk stats (read/write): 00:26:06.663 nvme0n1: ios=38345/0, merge=0/0, ticks=1217295/0, in_queue=1217295, util=96.81% 00:26:06.663 nvme10n1: ios=15749/0, merge=0/0, ticks=1221891/0, in_queue=1221891, util=97.03% 00:26:06.663 nvme1n1: 
ios=38319/0, merge=0/0, ticks=1220612/0, in_queue=1220612, util=97.37% 00:26:06.663 nvme2n1: ios=36803/0, merge=0/0, ticks=1220955/0, in_queue=1220955, util=97.56% 00:26:06.663 nvme3n1: ios=15752/0, merge=0/0, ticks=1222318/0, in_queue=1222318, util=97.63% 00:26:06.663 nvme4n1: ios=37655/0, merge=0/0, ticks=1217353/0, in_queue=1217353, util=98.04% 00:26:06.663 nvme5n1: ios=34409/0, merge=0/0, ticks=1214967/0, in_queue=1214967, util=98.24% 00:26:06.663 nvme6n1: ios=16379/0, merge=0/0, ticks=1220268/0, in_queue=1220268, util=98.38% 00:26:06.663 nvme7n1: ios=40020/0, merge=0/0, ticks=1216662/0, in_queue=1216662, util=98.81% 00:26:06.663 nvme8n1: ios=15768/0, merge=0/0, ticks=1224025/0, in_queue=1224025, util=99.04% 00:26:06.663 nvme9n1: ios=18921/0, merge=0/0, ticks=1225450/0, in_queue=1225450, util=99.22% 00:26:06.663 21:12:55 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:06.663 [global] 00:26:06.663 thread=1 00:26:06.663 invalidate=1 00:26:06.663 rw=randwrite 00:26:06.663 time_based=1 00:26:06.663 runtime=10 00:26:06.663 ioengine=libaio 00:26:06.663 direct=1 00:26:06.663 bs=262144 00:26:06.663 iodepth=64 00:26:06.663 norandommap=1 00:26:06.663 numjobs=1 00:26:06.663 00:26:06.663 [job0] 00:26:06.663 filename=/dev/nvme0n1 00:26:06.663 [job1] 00:26:06.663 filename=/dev/nvme10n1 00:26:06.663 [job2] 00:26:06.663 filename=/dev/nvme1n1 00:26:06.663 [job3] 00:26:06.663 filename=/dev/nvme2n1 00:26:06.663 [job4] 00:26:06.663 filename=/dev/nvme3n1 00:26:06.663 [job5] 00:26:06.663 filename=/dev/nvme4n1 00:26:06.663 [job6] 00:26:06.663 filename=/dev/nvme5n1 00:26:06.663 [job7] 00:26:06.663 filename=/dev/nvme6n1 00:26:06.663 [job8] 00:26:06.663 filename=/dev/nvme7n1 00:26:06.663 [job9] 00:26:06.663 filename=/dev/nvme8n1 00:26:06.663 [job10] 00:26:06.663 filename=/dev/nvme9n1 00:26:06.663 Could not set queue depth (nvme0n1) 00:26:06.663 Could not set queue depth (nvme10n1) 00:26:06.663 Could not set queue depth (nvme1n1) 00:26:06.663 Could not set queue depth (nvme2n1) 00:26:06.663 Could not set queue depth (nvme3n1) 00:26:06.663 Could not set queue depth (nvme4n1) 00:26:06.663 Could not set queue depth (nvme5n1) 00:26:06.663 Could not set queue depth (nvme6n1) 00:26:06.663 Could not set queue depth (nvme7n1) 00:26:06.663 Could not set queue depth (nvme8n1) 00:26:06.663 Could not set queue depth (nvme9n1) 00:26:06.663 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, 
(W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:06.663 fio-3.35 00:26:06.663 Starting 11 threads 00:26:16.644 00:26:16.644 job0: (groupid=0, jobs=1): err= 0: pid=3630412: Sat Jul 13 21:13:06 2024 00:26:16.644 write: IOPS=1445, BW=361MiB/s (379MB/s)(3635MiB/10056msec); 0 zone resets 00:26:16.644 slat (usec): min=16, max=16048, avg=683.64, stdev=1357.24 00:26:16.644 clat (msec): min=2, max=114, avg=43.57, stdev=17.37 00:26:16.644 lat (msec): min=2, max=114, avg=44.25, stdev=17.63 00:26:16.644 clat percentiles (msec): 00:26:16.644 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 19], 20.00th=[ 35], 00:26:16.644 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 40], 00:26:16.644 | 70.00th=[ 55], 80.00th=[ 58], 90.00th=[ 71], 95.00th=[ 75], 00:26:16.644 | 99.00th=[ 91], 99.50th=[ 93], 99.90th=[ 103], 99.95th=[ 107], 00:26:16.644 | 99.99th=[ 111] 00:26:16.644 bw ( KiB/s): min=185344, max=695808, per=10.79%, avg=370907.80, stdev=137327.40, samples=20 00:26:16.644 iops : min= 724, max= 2718, avg=1448.55, stdev=536.50, samples=20 00:26:16.644 lat (msec) : 4=0.06%, 10=0.03%, 20=14.13%, 50=51.40%, 100=34.26% 00:26:16.644 lat (msec) : 250=0.13% 00:26:16.644 cpu : usr=3.38%, sys=5.14%, ctx=3496, majf=0, minf=1 00:26:16.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:16.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.644 issued rwts: total=0,14540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.644 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.644 job1: (groupid=0, jobs=1): err= 0: pid=3630426: Sat Jul 13 21:13:06 2024 00:26:16.644 write: IOPS=1226, BW=307MiB/s (322MB/s)(3085MiB/10057msec); 0 zone resets 00:26:16.644 slat (usec): min=21, max=27218, avg=778.12, stdev=2049.02 00:26:16.644 clat (usec): min=782, max=134957, avg=51369.41, stdev=18833.65 00:26:16.644 lat (usec): min=1104, max=134996, avg=52147.53, stdev=19187.48 00:26:16.644 clat percentiles (msec): 00:26:16.644 | 1.00th=[ 8], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37], 00:26:16.644 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 44], 60.00th=[ 55], 00:26:16.644 | 70.00th=[ 60], 80.00th=[ 72], 90.00th=[ 75], 95.00th=[ 87], 00:26:16.644 | 99.00th=[ 94], 99.50th=[ 97], 99.90th=[ 125], 99.95th=[ 129], 00:26:16.644 | 99.99th=[ 129] 00:26:16.644 bw ( KiB/s): min=183296, max=443904, per=9.15%, avg=314555.90, stdev=94358.95, samples=20 00:26:16.644 iops : min= 716, max= 1734, avg=1228.50, stdev=368.65, samples=20 00:26:16.644 lat (usec) : 1000=0.01% 00:26:16.644 lat (msec) : 2=0.14%, 4=0.24%, 10=1.05%, 20=1.13%, 50=49.15% 00:26:16.644 lat (msec) : 100=48.01%, 250=0.26% 00:26:16.644 cpu : usr=2.46%, sys=4.21%, ctx=3036, majf=0, minf=1 00:26:16.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:16.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.644 issued rwts: total=0,12338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.644 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.644 job2: (groupid=0, jobs=1): err= 0: pid=3630427: Sat Jul 13 21:13:06 
2024 00:26:16.644 write: IOPS=1009, BW=252MiB/s (265MB/s)(2537MiB/10056msec); 0 zone resets 00:26:16.644 slat (usec): min=21, max=39093, avg=947.71, stdev=2386.26 00:26:16.644 clat (msec): min=18, max=127, avg=62.46, stdev=17.84 00:26:16.644 lat (msec): min=18, max=142, avg=63.40, stdev=18.20 00:26:16.644 clat percentiles (msec): 00:26:16.644 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 38], 00:26:16.644 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 71], 60.00th=[ 73], 00:26:16.644 | 70.00th=[ 74], 80.00th=[ 77], 90.00th=[ 81], 95.00th=[ 90], 00:26:16.644 | 99.00th=[ 95], 99.50th=[ 101], 99.90th=[ 114], 99.95th=[ 123], 00:26:16.644 | 99.99th=[ 128] 00:26:16.644 bw ( KiB/s): min=178533, max=446845, per=7.52%, avg=258431.75, stdev=75847.28, samples=20 00:26:16.644 iops : min= 697, max= 1745, avg=1009.15, stdev=296.29, samples=20 00:26:16.644 lat (msec) : 20=0.02%, 50=25.13%, 100=74.29%, 250=0.56% 00:26:16.644 cpu : usr=2.33%, sys=3.49%, ctx=2447, majf=0, minf=1 00:26:16.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:16.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.644 issued rwts: total=0,10147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.644 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.644 job3: (groupid=0, jobs=1): err= 0: pid=3630430: Sat Jul 13 21:13:06 2024 00:26:16.644 write: IOPS=998, BW=250MiB/s (262MB/s)(2510MiB/10055msec); 0 zone resets 00:26:16.644 slat (usec): min=22, max=15240, avg=973.31, stdev=1803.72 00:26:16.644 clat (msec): min=13, max=111, avg=63.09, stdev=12.73 00:26:16.644 lat (msec): min=13, max=111, avg=64.07, stdev=12.92 00:26:16.644 clat percentiles (msec): 00:26:16.644 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 53], 20.00th=[ 55], 00:26:16.644 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 69], 00:26:16.644 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 78], 95.00th=[ 84], 00:26:16.644 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 109], 00:26:16.644 | 99.99th=[ 109] 00:26:16.644 bw ( KiB/s): min=186368, max=400673, per=7.44%, avg=255727.45, stdev=48677.63, samples=20 00:26:16.644 iops : min= 728, max= 1565, avg=998.70, stdev=190.22, samples=20 00:26:16.644 lat (msec) : 20=0.08%, 50=9.22%, 100=90.54%, 250=0.16% 00:26:16.644 cpu : usr=2.43%, sys=4.14%, ctx=2555, majf=0, minf=1 00:26:16.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:16.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.644 issued rwts: total=0,10041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.644 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.644 job4: (groupid=0, jobs=1): err= 0: pid=3630431: Sat Jul 13 21:13:06 2024 00:26:16.644 write: IOPS=1166, BW=292MiB/s (306MB/s)(2924MiB/10028msec); 0 zone resets 00:26:16.644 slat (usec): min=19, max=38817, avg=809.47, stdev=1610.25 00:26:16.644 clat (msec): min=7, max=127, avg=54.05, stdev=11.93 00:26:16.644 lat (msec): min=7, max=127, avg=54.86, stdev=12.14 00:26:16.644 clat percentiles (msec): 00:26:16.644 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 41], 00:26:16.644 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:26:16.644 | 70.00th=[ 59], 80.00th=[ 59], 90.00th=[ 62], 95.00th=[ 71], 00:26:16.644 | 99.00th=[ 90], 99.50th=[ 93], 99.90th=[ 96], 99.95th=[ 113], 
00:26:16.644 | 99.99th=[ 122] 00:26:16.644 bw ( KiB/s): min=204288, max=432480, per=8.67%, avg=298092.75, stdev=56010.36, samples=20 00:26:16.644 iops : min= 798, max= 1689, avg=1164.30, stdev=218.74, samples=20 00:26:16.644 lat (msec) : 10=0.19%, 20=1.10%, 50=21.03%, 100=77.63%, 250=0.05% 00:26:16.644 cpu : usr=2.75%, sys=4.56%, ctx=3036, majf=0, minf=1 00:26:16.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:26:16.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.644 issued rwts: total=0,11694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.644 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.644 job5: (groupid=0, jobs=1): err= 0: pid=3630433: Sat Jul 13 21:13:06 2024 00:26:16.644 write: IOPS=1337, BW=334MiB/s (351MB/s)(3353MiB/10026msec); 0 zone resets 00:26:16.644 slat (usec): min=18, max=9100, avg=741.24, stdev=1369.23 00:26:16.644 clat (usec): min=12873, max=80334, avg=47090.08, stdev=14117.83 00:26:16.644 lat (usec): min=12930, max=80398, avg=47831.32, stdev=14327.38 00:26:16.644 clat percentiles (usec): 00:26:16.644 | 1.00th=[16909], 5.00th=[17695], 10.00th=[18482], 20.00th=[36439], 00:26:16.644 | 30.00th=[38536], 40.00th=[49021], 50.00th=[54789], 60.00th=[56361], 00:26:16.644 | 70.00th=[57410], 80.00th=[58459], 90.00th=[59507], 95.00th=[60556], 00:26:16.644 | 99.00th=[63177], 99.50th=[67634], 99.90th=[72877], 99.95th=[74974], 00:26:16.644 | 99.99th=[77071] 00:26:16.644 bw ( KiB/s): min=276992, max=849920, per=9.95%, avg=342073.45, stdev=131686.30, samples=20 00:26:16.644 iops : min= 1082, max= 3320, avg=1336.00, stdev=514.39, samples=20 00:26:16.644 lat (msec) : 20=11.77%, 50=28.49%, 100=59.74% 00:26:16.644 cpu : usr=3.09%, sys=5.06%, ctx=3217, majf=0, minf=1 00:26:16.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:16.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.644 issued rwts: total=0,13411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.644 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.644 job6: (groupid=0, jobs=1): err= 0: pid=3630434: Sat Jul 13 21:13:06 2024 00:26:16.644 write: IOPS=989, BW=247MiB/s (259MB/s)(2488MiB/10058msec); 0 zone resets 00:26:16.644 slat (usec): min=16, max=36591, avg=984.50, stdev=2259.50 00:26:16.644 clat (msec): min=4, max=128, avg=63.68, stdev=16.27 00:26:16.644 lat (msec): min=4, max=129, avg=64.67, stdev=16.57 00:26:16.644 clat percentiles (msec): 00:26:16.644 | 1.00th=[ 22], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 55], 00:26:16.644 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 66], 60.00th=[ 72], 00:26:16.645 | 70.00th=[ 74], 80.00th=[ 77], 90.00th=[ 81], 95.00th=[ 90], 00:26:16.645 | 99.00th=[ 95], 99.50th=[ 102], 99.90th=[ 120], 99.95th=[ 129], 00:26:16.645 | 99.99th=[ 129] 00:26:16.645 bw ( KiB/s): min=179559, max=435200, per=7.37%, avg=253357.35, stdev=59196.51, samples=20 00:26:16.645 iops : min= 701, max= 1700, avg=989.40, stdev=231.29, samples=20 00:26:16.645 lat (msec) : 10=0.32%, 20=0.60%, 50=15.32%, 100=83.23%, 250=0.53% 00:26:16.645 cpu : usr=2.13%, sys=4.05%, ctx=2504, majf=0, minf=1 00:26:16.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:16.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.645 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.645 issued rwts: total=0,9950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.645 job7: (groupid=0, jobs=1): err= 0: pid=3630435: Sat Jul 13 21:13:06 2024 00:26:16.645 write: IOPS=1316, BW=329MiB/s (345MB/s)(3309MiB/10054msec); 0 zone resets 00:26:16.645 slat (usec): min=17, max=28627, avg=722.71, stdev=1655.26 00:26:16.645 clat (usec): min=715, max=111934, avg=47878.66, stdev=18949.29 00:26:16.645 lat (usec): min=782, max=120200, avg=48601.38, stdev=19251.16 00:26:16.645 clat percentiles (msec): 00:26:16.645 | 1.00th=[ 11], 5.00th=[ 19], 10.00th=[ 21], 20.00th=[ 36], 00:26:16.645 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 40], 60.00th=[ 55], 00:26:16.645 | 70.00th=[ 57], 80.00th=[ 63], 90.00th=[ 74], 95.00th=[ 85], 00:26:16.645 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 102], 99.95th=[ 106], 00:26:16.645 | 99.99th=[ 112] 00:26:16.645 bw ( KiB/s): min=187392, max=682324, per=9.82%, avg=337597.65, stdev=118599.34, samples=20 00:26:16.645 iops : min= 732, max= 2665, avg=1318.40, stdev=463.32, samples=20 00:26:16.645 lat (usec) : 750=0.01%, 1000=0.04% 00:26:16.645 lat (msec) : 2=0.12%, 4=0.18%, 10=0.57%, 20=9.14%, 50=45.50% 00:26:16.645 lat (msec) : 100=44.27%, 250=0.17% 00:26:16.645 cpu : usr=2.74%, sys=3.93%, ctx=3282, majf=0, minf=1 00:26:16.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:16.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.645 issued rwts: total=0,13235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.645 job8: (groupid=0, jobs=1): err= 0: pid=3630436: Sat Jul 13 21:13:06 2024 00:26:16.645 write: IOPS=1097, BW=274MiB/s (288MB/s)(2759MiB/10055msec); 0 zone resets 00:26:16.645 slat (usec): min=23, max=43044, avg=885.57, stdev=1712.77 00:26:16.645 clat (msec): min=4, max=115, avg=57.41, stdev=11.37 00:26:16.645 lat (msec): min=4, max=115, avg=58.30, stdev=11.53 00:26:16.645 clat percentiles (msec): 00:26:16.645 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 55], 00:26:16.645 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:26:16.645 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 78], 00:26:16.645 | 99.00th=[ 91], 99.50th=[ 93], 99.90th=[ 104], 99.95th=[ 107], 00:26:16.645 | 99.99th=[ 115] 00:26:16.645 bw ( KiB/s): min=187392, max=366592, per=8.18%, avg=281170.45, stdev=37797.20, samples=20 00:26:16.645 iops : min= 732, max= 1432, avg=1098.10, stdev=147.66, samples=20 00:26:16.645 lat (msec) : 10=0.20%, 20=0.35%, 50=12.64%, 100=86.68%, 250=0.13% 00:26:16.645 cpu : usr=2.45%, sys=4.14%, ctx=2793, majf=0, minf=1 00:26:16.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:16.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.645 issued rwts: total=0,11035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.645 job9: (groupid=0, jobs=1): err= 0: pid=3630437: Sat Jul 13 21:13:06 2024 00:26:16.645 write: IOPS=1496, BW=374MiB/s (392MB/s)(3747MiB/10013msec); 0 zone resets 00:26:16.645 slat (usec): min=15, max=16561, avg=656.25, stdev=1422.35 00:26:16.645 clat (msec): min=11, max=102, avg=42.08, 
stdev=24.00 00:26:16.645 lat (msec): min=12, max=106, avg=42.74, stdev=24.36 00:26:16.645 clat percentiles (msec): 00:26:16.645 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], 00:26:16.645 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 38], 60.00th=[ 55], 00:26:16.645 | 70.00th=[ 58], 80.00th=[ 72], 90.00th=[ 77], 95.00th=[ 79], 00:26:16.645 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 99], 99.95th=[ 100], 00:26:16.645 | 99.99th=[ 102] 00:26:16.645 bw ( KiB/s): min=188416, max=893691, per=11.12%, avg=382492.40, stdev=243096.31, samples=20 00:26:16.645 iops : min= 736, max= 3490, avg=1493.80, stdev=949.59, samples=20 00:26:16.645 lat (msec) : 20=40.93%, 50=16.82%, 100=42.24%, 250=0.01% 00:26:16.645 cpu : usr=2.81%, sys=4.81%, ctx=3437, majf=0, minf=1 00:26:16.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:26:16.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.645 issued rwts: total=0,14989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.645 job10: (groupid=0, jobs=1): err= 0: pid=3630438: Sat Jul 13 21:13:06 2024 00:26:16.645 write: IOPS=1363, BW=341MiB/s (358MB/s)(3429MiB/10056msec); 0 zone resets 00:26:16.645 slat (usec): min=19, max=60493, avg=703.21, stdev=2024.51 00:26:16.645 clat (msec): min=3, max=130, avg=46.21, stdev=18.97 00:26:16.645 lat (msec): min=3, max=144, avg=46.91, stdev=19.32 00:26:16.645 clat percentiles (msec): 00:26:16.645 | 1.00th=[ 18], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 36], 00:26:16.645 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 39], 00:26:16.645 | 70.00th=[ 54], 80.00th=[ 67], 90.00th=[ 74], 95.00th=[ 89], 00:26:16.645 | 99.00th=[ 95], 99.50th=[ 99], 99.90th=[ 122], 99.95th=[ 127], 00:26:16.645 | 99.99th=[ 127] 00:26:16.645 bw ( KiB/s): min=178533, max=646412, per=10.17%, avg=349841.65, stdev=111404.93, samples=20 00:26:16.645 iops : min= 697, max= 2525, avg=1366.30, stdev=435.24, samples=20 00:26:16.645 lat (msec) : 4=0.01%, 10=0.08%, 20=5.96%, 50=61.96%, 100=31.62% 00:26:16.645 lat (msec) : 250=0.37% 00:26:16.645 cpu : usr=2.67%, sys=3.87%, ctx=3166, majf=0, minf=1 00:26:16.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:26:16.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:16.645 issued rwts: total=0,13714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:16.645 00:26:16.645 Run status group 0 (all jobs): 00:26:16.645 WRITE: bw=3358MiB/s (3521MB/s), 247MiB/s-374MiB/s (259MB/s-392MB/s), io=33.0GiB (35.4GB), run=10013-10058msec 00:26:16.645 00:26:16.645 Disk stats (read/write): 00:26:16.645 nvme0n1: ios=49/28962, merge=0/0, ticks=12/1234289, in_queue=1234301, util=95.71% 00:26:16.645 nvme10n1: ios=0/24553, merge=0/0, ticks=0/1235065, in_queue=1235065, util=95.89% 00:26:16.645 nvme1n1: ios=0/20169, merge=0/0, ticks=0/1233083, in_queue=1233083, util=96.36% 00:26:16.645 nvme2n1: ios=0/19956, merge=0/0, ticks=0/1232171, in_queue=1232171, util=96.64% 00:26:16.645 nvme3n1: ios=0/23269, merge=0/0, ticks=0/1234292, in_queue=1234292, util=96.77% 00:26:16.645 nvme4n1: ios=0/26695, merge=0/0, ticks=0/1234128, in_queue=1234128, util=97.34% 00:26:16.645 nvme5n1: ios=0/19773, merge=0/0, ticks=0/1230569, in_queue=1230569, 
util=97.63% 00:26:16.645 nvme6n1: ios=0/26346, merge=0/0, ticks=0/1233829, in_queue=1233829, util=97.83% 00:26:16.645 nvme7n1: ios=0/21944, merge=0/0, ticks=0/1233001, in_queue=1233001, util=98.52% 00:26:16.645 nvme8n1: ios=0/29851, merge=0/0, ticks=0/1236608, in_queue=1236608, util=98.85% 00:26:16.645 nvme9n1: ios=0/27305, merge=0/0, ticks=0/1233821, in_queue=1233821, util=99.09% 00:26:16.645 21:13:06 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:16.645 21:13:06 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:16.645 21:13:06 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.645 21:13:06 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:17.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:17.214 21:13:07 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:17.214 21:13:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:17.214 21:13:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:17.214 21:13:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:26:17.214 21:13:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:17.214 21:13:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:26:17.214 21:13:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:17.214 21:13:08 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.214 21:13:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.214 21:13:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.214 21:13:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.214 21:13:08 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.214 21:13:08 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:18.184 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:18.184 21:13:08 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:18.184 21:13:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:18.184 21:13:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:18.184 21:13:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:26:18.184 21:13:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:18.184 21:13:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:26:18.184 21:13:09 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:18.184 21:13:09 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:18.184 21:13:09 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.184 21:13:09 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:18.184 21:13:09 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.184 21:13:09 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.184 21:13:09 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:19.122 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:19.122 21:13:09 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:19.122 21:13:09 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:19.122 21:13:09 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:19.122 21:13:09 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:26:19.122 21:13:09 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:19.122 21:13:09 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:26:19.122 21:13:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:19.122 21:13:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:19.122 21:13:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.381 21:13:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.381 21:13:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.381 21:13:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.381 21:13:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:20.319 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:20.319 21:13:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:20.319 21:13:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:20.319 21:13:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:20.319 21:13:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:26:20.319 21:13:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:20.319 21:13:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:26:20.319 21:13:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:20.319 21:13:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:20.319 21:13:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.319 21:13:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:20.319 21:13:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.319 21:13:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.319 21:13:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:21.258 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:21.258 21:13:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:21.258 21:13:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:21.258 21:13:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:21.258 21:13:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:26:21.258 21:13:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:21.258 21:13:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:26:21.258 21:13:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:21.258 21:13:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:21.258 21:13:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.258 21:13:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.258 21:13:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.258 21:13:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.258 21:13:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:22.195 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:22.195 21:13:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:22.195 21:13:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:22.195 21:13:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:22.195 21:13:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:26:22.195 21:13:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:26:22.195 21:13:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:22.195 21:13:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:22.195 21:13:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:22.195 21:13:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.195 21:13:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.195 21:13:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.195 21:13:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.195 21:13:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:23.132 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:23.132 21:13:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:23.132 21:13:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:23.132 21:13:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:23.132 21:13:13 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:26:23.132 21:13:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:23.132 21:13:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:26:23.391 21:13:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:23.391 21:13:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:23.391 21:13:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.391 21:13:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.391 21:13:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.391 21:13:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.391 21:13:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:24.328 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:24.328 21:13:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:24.328 21:13:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:24.328 21:13:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:24.328 21:13:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:26:24.328 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:24.328 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:26:24.328 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:24.328 21:13:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:24.328 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.328 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.328 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.328 21:13:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.328 21:13:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:25.266 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:25.266 21:13:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:25.266 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:25.266 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:25.266 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:26:25.266 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:25.266 21:13:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:26:25.266 21:13:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:25.266 21:13:16 
nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:25.266 21:13:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.266 21:13:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.266 21:13:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.266 21:13:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.266 21:13:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:26.204 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:26.204 21:13:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:26.204 21:13:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:26.204 21:13:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:26.204 21:13:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:26:26.204 21:13:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:26.204 21:13:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:26:26.204 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:26.204 21:13:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:26.204 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.204 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.204 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.204 21:13:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.204 21:13:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:27.141 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
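For readers following the xtrace: the cnode5 through cnode11 teardown above is a single loop in multiconnection.sh, which detaches the kernel initiator, waits for the namespace's block device to disappear, and then deletes the subsystem over RPC. Reconstructed from the trace (rpc_cmd and waitforserial_disconnect are the suite's own helpers):

    for i in $(seq 1 $NVMF_SUBSYS); do
        # detach the initiator from subsystem i
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # poll 'lsblk -o NAME,SERIAL' until serial SPDK${i} is gone
        waitforserial_disconnect "SPDK${i}"
        # remove the subsystem from the running target
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done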
00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:27.141 21:13:17 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:27.141 rmmod nvme_rdma 00:26:27.141 rmmod nvme_fabrics 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3622446 ']' 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3622446 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 3622446 ']' 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 3622446 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3622446 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3622446' 00:26:27.401 killing process with pid 3622446 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 3622446 00:26:27.401 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 3622446 00:26:27.970 21:13:18 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:27.970 21:13:18 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:27.970 00:26:27.970 real 1m15.090s 00:26:27.970 user 4m53.037s 00:26:27.970 sys 0m19.068s 00:26:27.970 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:27.970 21:13:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:27.970 ************************************ 00:26:27.970 END TEST nvmf_multiconnection 00:26:27.970 ************************************ 00:26:27.970 21:13:18 nvmf_rdma -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 
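The nvmfcleanup trace above (common.sh@120-125) disables errexit and retries the module unload, since modprobe -r can fail transiently while controllers are still tearing down; the pattern is roughly this (a sketch of what the trace shows, not the exact helper):

    set +e
    for i in {1..20}; do
        # -v prints the underlying rmmod calls seen in the log
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e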
00:26:27.970 21:13:18 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:27.970 21:13:18 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:27.970 21:13:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:27.970 ************************************ 00:26:27.970 START TEST nvmf_initiator_timeout 00:26:27.970 ************************************ 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:26:27.970 * Looking for test storage... 00:26:27.970 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.970 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:27.971 21:13:18 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:34.540 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:34.540 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5 == 
e810 ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:34.540 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:34.540 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:34.540 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # rdma_device_init 00:26:34.541 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:34.541 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # uname 00:26:34.541 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:34.541 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:34.541 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:34.541 21:13:24 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:34.541 21:13:25 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:34.541 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:34.541 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:34.541 altname enp217s0f0np0 00:26:34.541 altname ens818f0np0 00:26:34.541 inet 192.168.100.8/24 scope global mlx_0_0 00:26:34.541 valid_lft forever preferred_lft forever 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:34.541 21:13:25 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:34.541 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:34.541 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:34.541 altname enp217s0f1np1 00:26:34.541 altname ens818f1np1 00:26:34.541 inet 192.168.100.9/24 scope global mlx_0_1 00:26:34.541 valid_lft forever preferred_lft forever 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:34.541 21:13:25 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:34.541 192.168.100.9' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:34.541 192.168.100.9' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # head -n 1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:34.541 192.168.100.9' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # tail -n +2 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # head -n 1 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3637155 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3637155 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 3637155 ']' 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 
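The target-address bookkeeping just traced is worth restating plainly: the helper collects one IPv4 address per RDMA interface and peels off the first two. Commands and values exactly as in the trace:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9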
00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.541 21:13:25 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:34.541 [2024-07-13 21:13:25.254501] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:34.541 [2024-07-13 21:13:25.254551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.541 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.541 [2024-07-13 21:13:25.326227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:34.541 [2024-07-13 21:13:25.365955] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.542 [2024-07-13 21:13:25.365996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.542 [2024-07-13 21:13:25.366006] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.542 [2024-07-13 21:13:25.366029] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.542 [2024-07-13 21:13:25.366036] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
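Behind the "Waiting for process to start up..." message: nvmfappstart launches the target with the flags visible in the EAL line (-i 0 shared-memory id, -e 0xFFFF tracepoint group mask, -m 0xF a four-core mask) and waitforlisten polls the RPC socket until it answers. A minimal version of that pattern, assuming the default /var/tmp/spdk.sock socket and SPDK's rpc.py:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # rpc_get_methods only succeeds once the app is up and listening
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done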
00:26:34.542 [2024-07-13 21:13:25.366086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.542 [2024-07-13 21:13:25.366185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.542 [2024-07-13 21:13:25.366268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:34.542 [2024-07-13 21:13:25.366270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.480 Malloc0 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.480 Delay0 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.480 [2024-07-13 21:13:26.176827] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc60a30/0xdb8940) succeed. 00:26:35.480 [2024-07-13 21:13:26.187615] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd10c50/0xc78740) succeed. 
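The bdev stack built above is the heart of this test: Delay0 wraps the 64 MiB Malloc0 ramdisk and injects configurable latencies, which the script later raises to 31000000 us (31 s, past the initiator's I/O timeout) mid-fio and then drops back to 30 us. The two RPCs from the trace, annotated (flag meanings per rpc.py bdev_delay_create; treat the comments as a reading aid, not new behavior):

    # 64 MiB ramdisk with 512-byte blocks
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    # -r/-w: average read/write latency, -t/-n: p99 read/write latency, in microseconds
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30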
00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.480 [2024-07-13 21:13:26.330886] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.480 21:13:26 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:26:36.857 21:13:27 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:36.857 21:13:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:26:36.857 21:13:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:36.857 21:13:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:36.857 21:13:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:26:38.770 21:13:29 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:38.770 21:13:29 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:38.770 21:13:29 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:26:38.770 21:13:29 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:38.770 21:13:29 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:38.770 21:13:29 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:26:38.770 21:13:29 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3637731 00:26:38.770 21:13:29 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:38.770 21:13:29 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:38.770 [global] 00:26:38.770 thread=1 00:26:38.770 invalidate=1 00:26:38.770 rw=write 00:26:38.770 time_based=1 00:26:38.770 runtime=60 00:26:38.770 ioengine=libaio 00:26:38.770 direct=1 00:26:38.770 bs=4096 00:26:38.770 iodepth=1 00:26:38.770 norandommap=0 00:26:38.770 numjobs=1 00:26:38.770 00:26:38.770 verify_dump=1 00:26:38.770 verify_backlog=512 00:26:38.770 verify_state_save=0 00:26:38.770 do_verify=1 00:26:38.770 verify=crc32c-intel 00:26:38.770 [job0] 00:26:38.770 filename=/dev/nvme0n1 00:26:38.770 Could not set queue depth (nvme0n1) 00:26:39.029 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:39.029 fio-3.35 00:26:39.029 Starting 1 thread 00:26:41.602 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.603 true 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.603 true 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.603 true 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.603 true 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.603 21:13:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.924 true 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd 
bdev_delay_update_latency Delay0 avg_write 30 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.924 true 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.924 true 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.924 true 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:44.924 21:13:35 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3637731 00:27:41.157 00:27:41.157 job0: (groupid=0, jobs=1): err= 0: pid=3637978: Sat Jul 13 21:14:29 2024 00:27:41.157 read: IOPS=1254, BW=5018KiB/s (5138kB/s)(294MiB/60000msec) 00:27:41.158 slat (usec): min=2, max=267, avg= 9.01, stdev= 1.52 00:27:41.158 clat (usec): min=36, max=479, avg=103.41, stdev= 7.58 00:27:41.158 lat (usec): min=86, max=507, avg=112.42, stdev= 7.77 00:27:41.158 clat percentiles (usec): 00:27:41.158 | 1.00th=[ 91], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 97], 00:27:41.158 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 105], 00:27:41.158 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 113], 95.00th=[ 116], 00:27:41.158 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 131], 99.95th=[ 145], 00:27:41.158 | 99.99th=[ 281] 00:27:41.158 write: IOPS=1261, BW=5045KiB/s (5166kB/s)(296MiB/60000msec); 0 zone resets 00:27:41.158 slat (usec): min=3, max=6042, avg=11.20, stdev=22.01 00:27:41.158 clat (usec): min=73, max=42723k, avg=665.66, stdev=155298.33 00:27:41.158 lat (usec): min=84, max=42723k, avg=676.85, stdev=155298.33 00:27:41.158 clat percentiles (usec): 00:27:41.158 | 1.00th=[ 89], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 95], 00:27:41.158 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 103], 00:27:41.158 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 114], 00:27:41.158 | 99.00th=[ 119], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 137], 00:27:41.158 | 99.99th=[ 255] 00:27:41.158 bw ( KiB/s): min= 3152, max=19560, per=100.00%, avg=16852.11, stdev=3054.61, samples=35 00:27:41.158 iops : min= 788, max= 4890, avg=4213.03, stdev=763.65, samples=35 00:27:41.158 lat (usec) : 50=0.01%, 100=40.06%, 250=59.92%, 500=0.01% 00:27:41.158 lat (msec) : 2=0.01%, >=2000=0.01% 00:27:41.158 cpu : usr=1.73%, sys=3.37%, ctx=150951, majf=0, minf=141 00:27:41.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.158 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.158 issued rwts: total=75264,75680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:41.158 00:27:41.158 Run status group 0 (all jobs): 00:27:41.158 READ: bw=5018KiB/s (5138kB/s), 5018KiB/s-5018KiB/s (5138kB/s-5138kB/s), io=294MiB (308MB), run=60000-60000msec 00:27:41.158 WRITE: bw=5045KiB/s (5166kB/s), 5045KiB/s-5045KiB/s (5166kB/s-5166kB/s), io=296MiB (310MB), run=60000-60000msec 00:27:41.158 00:27:41.158 Disk stats (read/write): 00:27:41.158 nvme0n1: ios=75125/75264, merge=0/0, ticks=7185/7201, in_queue=14386, util=99.68% 00:27:41.158 21:14:29 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:41.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:41.158 nvmf hotplug test: fio successful as expected 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:41.158 rmmod nvme_rdma 00:27:41.158 
rmmod nvme_fabrics 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3637155 ']' 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3637155 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 3637155 ']' 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 3637155 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3637155 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3637155' 00:27:41.158 killing process with pid 3637155 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 3637155 00:27:41.158 21:14:30 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 3637155 00:27:41.158 21:14:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.158 21:14:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:41.158 00:27:41.158 real 1m12.548s 00:27:41.158 user 4m33.920s 00:27:41.158 sys 0m7.541s 00:27:41.158 21:14:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:41.158 21:14:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:41.158 ************************************ 00:27:41.158 END TEST nvmf_initiator_timeout 00:27:41.158 ************************************ 00:27:41.158 21:14:31 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:41.158 21:14:31 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:27:41.158 21:14:31 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:27:41.158 21:14:31 nvmf_rdma -- nvmf/nvmf.sh@81 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:27:41.158 21:14:31 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:41.158 21:14:31 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:41.158 21:14:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:41.158 ************************************ 00:27:41.158 START TEST nvmf_srq_overwhelm 00:27:41.158 ************************************ 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:27:41.158 * Looking for test storage... 
00:27:41.158 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.158 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:27:41.159 21:14:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:47.747 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:47.747 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:47.747 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:47.747 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:47.747 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:47.748 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:47.748 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:47.748 altname enp217s0f0np0 00:27:47.748 altname ens818f0np0 00:27:47.748 inet 192.168.100.8/24 scope global mlx_0_0 00:27:47.748 valid_lft forever preferred_lft forever 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:47.748 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:47.748 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:47.748 altname enp217s0f1np1 00:27:47.748 altname ens818f1np1 00:27:47.748 inet 192.168.100.9/24 scope global mlx_0_1 00:27:47.748 valid_lft forever preferred_lft forever 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
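The two address probes above reduce to one small helper: given an RDMA netdev name, print its first IPv4 address. A minimal sketch of that get_ip_address pattern, lifted from the trace (the function body is a reconstruction; anything beyond the three piped commands actually shown in the xtrace is an assumption):

# Sketch of the get_ip_address pattern exercised in the trace above.
# `ip -o -4 addr show` prints one line per address; field 4 is the
# CIDR form (e.g. 192.168.100.8/24), so cut strips the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

ip_0=$(get_ip_address mlx_0_0)   # expected from this log: 192.168.100.8
ip_1=$(get_ip_address mlx_0_1)   # expected from this log: 192.168.100.9
[[ -z $ip_0 ]] && echo "no IPv4 address on mlx_0_0" >&2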
00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:47.748 
192.168.100.9' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:47.748 192.168.100.9' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:47.748 192.168.100.9' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=3651215 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 3651215 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@827 -- # '[' -z 3651215 ']' 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.748 21:14:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.748 [2024-07-13 21:14:37.941582] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:47.748 [2024-07-13 21:14:37.941634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.748 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.748 [2024-07-13 21:14:38.011829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.748 [2024-07-13 21:14:38.051684] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:47.748 [2024-07-13 21:14:38.051725] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.748 [2024-07-13 21:14:38.051735] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.748 [2024-07-13 21:14:38.051743] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.748 [2024-07-13 21:14:38.051750] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.748 [2024-07-13 21:14:38.051803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.748 [2024-07-13 21:14:38.051896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.748 [2024-07-13 21:14:38.051982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.748 [2024-07-13 21:14:38.051983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.748 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:47.748 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # return 0 00:27:47.748 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.749 [2024-07-13 21:14:38.223371] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x885c80/0x88a170) succeed. 00:27:47.749 [2024-07-13 21:14:38.233638] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8872c0/0x8cb800) succeed. 
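With the transport created and both mlx5 IB devices registered, the test walks cnode0 through cnode5, and each iteration traced below repeats the same five steps: create a subsystem, back it with a 64 MiB malloc bdev, attach that bdev as a namespace, add an RDMA listener, then connect the kernel initiator. A condensed sketch of that loop, with rpc.py standing in for the test's rpc_cmd wrapper (that substitution is an assumption about the wrapper's plumbing; the arguments themselves are taken verbatim from the trace):

# Hypothetical condensed form of the srq_overwhelm setup loop traced below.
# rpc.py is assumed to talk to the nvmf_tgt already listening on /var/tmp/spdk.sock.
for i in $(seq 0 5); do
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc.py bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -a 192.168.100.8 -s 4420
    # -i 15 limits the initiator to 15 I/O queues, as set by NVME_CONNECT above
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode$i \
        -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e
done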
00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.749 Malloc0 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:47.749 [2024-07-13 21:14:38.332521] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.749 21:14:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme0n1 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:48.686 Malloc1 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.686 21:14:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme1n1 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme1n1 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:49.623 Malloc2 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.623 21:14:40 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme2n1 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme2n1 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.560 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:50.819 Malloc3 00:27:50.819 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.819 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:50.819 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.819 21:14:41 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:50.819 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.819 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:50.819 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.819 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:50.819 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.819 21:14:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme3n1 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme3n1 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:51.755 Malloc4 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
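The connect that follows is gated by waitforblk, whose xtrace fragments (local i=0, lsblk -l -o NAME, grep -q -w nvmeXn1, return 0) recur around each connect in this log. A reconstruction of that polling helper, assuming a 15-retry, 1-second-sleep budget (every probe in this log succeeds immediately, so the retry bound is a guess, not something the trace confirms):

# Reconstruction of the waitforblk pattern from the trace fragments.
# Polls lsblk until the named block device appears or the budget runs out.
waitforblk() {
    local name=$1 i=0
    while ! lsblk -l -o NAME | grep -q -w "$name"; do
        (( ++i > 15 )) && return 1   # assumed bound; give up after ~15s
        sleep 1
    done
    return 0
}
waitforblk nvme4n1   # e.g. right after the cnode4 connect traced below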
00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.755 21:14:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme4n1 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme4n1 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:52.691 Malloc5 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.691 21:14:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:27:54.081 21:14:44 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:27:54.081 21:14:44 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1231 -- # local i=0 00:27:54.081 21:14:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:27:54.081 21:14:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme5n1 00:27:54.081 21:14:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:27:54.081 21:14:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme5n1 00:27:54.081 21:14:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:27:54.081 21:14:44 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:27:54.081 [global] 00:27:54.081 thread=1 00:27:54.081 invalidate=1 00:27:54.081 rw=read 00:27:54.081 time_based=1 00:27:54.081 runtime=10 00:27:54.081 ioengine=libaio 00:27:54.081 direct=1 00:27:54.081 bs=1048576 00:27:54.081 iodepth=128 00:27:54.081 norandommap=1 00:27:54.081 numjobs=13 00:27:54.081 00:27:54.081 [job0] 00:27:54.081 filename=/dev/nvme0n1 00:27:54.081 [job1] 00:27:54.081 filename=/dev/nvme1n1 00:27:54.081 [job2] 00:27:54.081 filename=/dev/nvme2n1 00:27:54.081 [job3] 00:27:54.081 filename=/dev/nvme3n1 00:27:54.081 [job4] 00:27:54.081 filename=/dev/nvme4n1 00:27:54.081 [job5] 00:27:54.081 filename=/dev/nvme5n1 00:27:54.081 Could not set queue depth (nvme0n1) 00:27:54.081 Could not set queue depth (nvme1n1) 00:27:54.081 Could not set queue depth (nvme2n1) 00:27:54.081 Could not set queue depth (nvme3n1) 00:27:54.081 Could not set queue depth (nvme4n1) 00:27:54.081 Could not set queue depth (nvme5n1) 00:27:54.339 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:54.339 ... 00:27:54.339 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:54.339 ... 00:27:54.339 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:54.339 ... 00:27:54.339 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:54.339 ... 00:27:54.339 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:54.339 ... 00:27:54.339 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:54.339 ... 
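The fio-wrapper flags above map one-to-one onto the job file fio echoes back: -i becomes bs, -d iodepth, -t the rw mode, -r runtime, and -n numjobs, plus one [jobN] section per connected namespace. A standalone approximation of that run, assuming the six /dev/nvmeXn1 devices connected above and a hypothetical srq_overwhelm.fio file name (the wrapper's real template may differ):

# Approximation of what fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13
# appears to generate, judging by the [global]/[jobN] dump above.
cat > srq_overwhelm.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13
EOF
for i in 0 1 2 3 4 5; do
    printf '\n[job%d]\nfilename=/dev/nvme%dn1\n' "$i" "$i" >> srq_overwhelm.fio
done
fio srq_overwhelm.fio   # 6 jobs x numjobs=13 = the 78 threads fio reports below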
00:27:54.339 fio-3.35
00:27:54.339 Starting 78 threads
00:28:06.590 
00:28:06.590 job0: (groupid=0, jobs=1): err= 0: pid=3652614: Sat Jul 13 21:14:55 2024
00:28:06.590 read: IOPS=26, BW=26.1MiB/s (27.3MB/s)(270MiB/10354msec)
00:28:06.590 slat (usec): min=54, max=2128.2k, avg=38118.59, stdev=240673.20
00:28:06.590 clat (msec): min=59, max=5647, avg=3348.22, stdev=2037.51
00:28:06.590 lat (msec): min=904, max=5651, avg=3386.34, stdev=2025.22
00:28:06.590 clat percentiles (msec):
00:28:06.590 | 1.00th=[ 902], 5.00th=[ 911], 10.00th=[ 911], 20.00th=[ 936],
00:28:06.590 | 30.00th=[ 953], 40.00th=[ 1385], 50.00th=[ 4597], 60.00th=[ 4866],
00:28:06.590 | 70.00th=[ 5067], 80.00th=[ 5269], 90.00th=[ 5470], 95.00th=[ 5604],
00:28:06.590 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671],
00:28:06.590 | 99.99th=[ 5671]
00:28:06.590 bw ( KiB/s): min= 2048, max=141312, per=1.45%, avg=58161.40, stdev=68182.04, samples=5
00:28:06.590 iops : min= 2, max= 138, avg=56.60, stdev=66.78, samples=5
00:28:06.590 lat (msec) : 100=0.37%, 1000=38.89%, 2000=0.74%, >=2000=60.00%
00:28:06.590 cpu : usr=0.00%, sys=1.33%, ctx=252, majf=0, minf=32769
00:28:06.590 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.9%, >=64=76.7%
00:28:06.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.590 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:28:06.590 issued rwts: total=270,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.590 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.590 job0: (groupid=0, jobs=1): err= 0: pid=3652616: Sat Jul 13 21:14:55 2024
00:28:06.590 read: IOPS=149, BW=149MiB/s (157MB/s)(1501MiB/10046msec)
00:28:06.590 slat (usec): min=36, max=2107.1k, avg=6659.47, stdev=56411.57
00:28:06.590 clat (msec): min=42, max=5378, avg=805.99, stdev=948.15
00:28:06.590 lat (msec): min=46, max=5575, avg=812.65, stdev=956.15
00:28:06.590 clat percentiles (msec):
00:28:06.590 | 1.00th=[ 92], 5.00th=[ 118], 10.00th=[ 138], 20.00th=[ 234],
00:28:06.590 | 30.00th=[ 239], 40.00th=[ 288], 50.00th=[ 542], 60.00th=[ 818],
00:28:06.590 | 70.00th=[ 860], 80.00th=[ 978], 90.00th=[ 1250], 95.00th=[ 3608],
00:28:06.590 | 99.00th=[ 3675], 99.50th=[ 3675], 99.90th=[ 5336], 99.95th=[ 5403],
00:28:06.590 | 99.99th=[ 5403]
00:28:06.591 bw ( KiB/s): min= 4096, max=673792, per=4.39%, avg=175872.00, stdev=181414.54, samples=16
00:28:06.591 iops : min= 4, max= 658, avg=171.75, stdev=177.16, samples=16
00:28:06.591 lat (msec) : 50=0.20%, 100=1.07%, 250=36.84%, 500=10.59%, 750=9.93%
00:28:06.591 lat (msec) : 1000=22.25%, 2000=10.46%, >=2000=8.66%
00:28:06.591 cpu : usr=0.08%, sys=2.23%, ctx=1728, majf=0, minf=32769
00:28:06.591 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8%
00:28:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.591 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:06.591 issued rwts: total=1501,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.591 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.591 job0: (groupid=0, jobs=1): err= 0: pid=3652617: Sat Jul 13 21:14:55 2024
00:28:06.591 read: IOPS=21, BW=21.5MiB/s (22.5MB/s)(223MiB/10386msec)
00:28:06.591 slat (usec): min=522, max=2117.4k, avg=46302.14, stdev=272360.20
00:28:06.591 clat (msec): min=59, max=8584, avg=4121.52, stdev=2146.48
00:28:06.591 lat (msec): min=592, max=10205, avg=4167.82, stdev=2162.33
00:28:06.591 clat percentiles (msec):
00:28:06.591 | 1.00th=[ 592], 5.00th=[ 617], 10.00th=[ 651], 20.00th=[ 751],
00:28:06.591 | 30.00th=[ 4329], 40.00th=[ 4665], 50.00th=[ 5403], 60.00th=[ 5537],
00:28:06.591 | 70.00th=[ 5604], 80.00th=[ 5738], 90.00th=[ 5873], 95.00th=[ 5873],
00:28:06.591 | 99.00th=[ 6409], 99.50th=[ 6477], 99.90th=[ 8557], 99.95th=[ 8557],
00:28:06.591 | 99.99th=[ 8557]
00:28:06.591 bw ( KiB/s): min= 4096, max=184320, per=1.62%, avg=64853.33, stdev=103466.24, samples=3
00:28:06.591 iops : min= 4, max= 180, avg=63.33, stdev=101.04, samples=3
00:28:06.591 lat (msec) : 100=0.45%, 750=19.28%, 1000=5.83%, 2000=1.35%, >=2000=73.09%
00:28:06.591 cpu : usr=0.03%, sys=0.90%, ctx=442, majf=0, minf=32769
00:28:06.591 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.3%, >=64=71.7%
00:28:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.591 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0%
00:28:06.591 issued rwts: total=223,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.591 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.591 job0: (groupid=0, jobs=1): err= 0: pid=3652618: Sat Jul 13 21:14:55 2024
00:28:06.591 read: IOPS=2, BW=2673KiB/s (2737kB/s)(27.0MiB/10343msec)
00:28:06.591 slat (usec): min=960, max=2133.9k, avg=381258.44, stdev=793823.96
00:28:06.591 clat (msec): min=48, max=10341, avg=8160.84, stdev=3122.52
00:28:06.591 lat (msec): min=2136, max=10342, avg=8542.10, stdev=2692.76
00:28:06.591 clat percentiles (msec):
00:28:06.591 | 1.00th=[ 49], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 6409],
00:28:06.591 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10268], 60.00th=[10268],
00:28:06.591 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402],
00:28:06.591 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:28:06.591 | 99.99th=[10402]
00:28:06.591 lat (msec) : 50=3.70%, >=2000=96.30%
00:28:06.591 cpu : usr=0.00%, sys=0.23%, ctx=77, majf=0, minf=6913
00:28:06.591 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0%
00:28:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.591 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:28:06.591 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.591 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.591 job0: (groupid=0, jobs=1): err= 0: pid=3652619: Sat Jul 13 21:14:55 2024
00:28:06.591 read: IOPS=40, BW=40.7MiB/s (42.7MB/s)(418MiB/10263msec)
00:28:06.591 slat (usec): min=44, max=2126.7k, avg=24431.11, stdev=205058.48
00:28:06.591 clat (msec): min=48, max=9048, avg=3026.62, stdev=3780.46
00:28:06.591 lat (msec): min=428, max=9051, avg=3051.05, stdev=3787.17
00:28:06.591 clat percentiles (msec):
00:28:06.591 | 1.00th=[ 430], 5.00th=[ 430], 10.00th=[ 447], 20.00th=[ 472],
00:28:06.591 | 30.00th=[ 514], 40.00th=[ 518], 50.00th=[ 523], 60.00th=[ 558],
00:28:06.591 | 70.00th=[ 4799], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 8926],
00:28:06.591 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:28:06.591 | 99.99th=[ 9060]
00:28:06.591 bw ( KiB/s): min= 2048, max=284672, per=2.47%, avg=98916.33, stdev=121744.39, samples=6
00:28:06.591 iops : min= 2, max= 278, avg=96.50, stdev=118.78, samples=6
00:28:06.591 lat (msec) : 50=0.24%, 500=26.79%, 750=41.15%, >=2000=31.82%
00:28:06.591 cpu : usr=0.00%, sys=1.33%, ctx=345, majf=0, minf=32769
00:28:06.591 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.7%, >=64=84.9%
00:28:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.591 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:06.591 issued rwts: total=418,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.591 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.591 job0: (groupid=0, jobs=1): err= 0: pid=3652620: Sat Jul 13 21:14:55 2024
00:28:06.591 read: IOPS=1, BW=1293KiB/s (1324kB/s)(13.0MiB/10296msec)
00:28:06.591 slat (msec): min=10, max=2112, avg=788.17, stdev=1000.93
00:28:06.591 clat (msec): min=48, max=10182, avg=4582.23, stdev=3042.47
00:28:06.591 lat (msec): min=2120, max=10294, avg=5370.40, stdev=3096.79
00:28:06.591 clat percentiles (msec):
00:28:06.591 | 1.00th=[ 49], 5.00th=[ 49], 10.00th=[ 2123], 20.00th=[ 2140],
00:28:06.591 | 30.00th=[ 2165], 40.00th=[ 4245], 50.00th=[ 4279], 60.00th=[ 4329],
00:28:06.591 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[ 8557], 95.00th=[10134],
00:28:06.591 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:28:06.591 | 99.99th=[10134]
00:28:06.591 lat (msec) : 50=7.69%, >=2000=92.31%
00:28:06.591 cpu : usr=0.00%, sys=0.08%, ctx=56, majf=0, minf=3329
00:28:06.591 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:28:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.591 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.591 issued rwts: total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.591 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.591 job0: (groupid=0, jobs=1): err= 0: pid=3652621: Sat Jul 13 21:14:55 2024
00:28:06.591 read: IOPS=4, BW=4465KiB/s (4572kB/s)(45.0MiB/10320msec)
00:28:06.591 slat (usec): min=937, max=2083.3k, avg=227862.22, stdev=631199.81
00:28:06.591 clat (msec): min=65, max=10316, avg=7265.75, stdev=2995.04
00:28:06.591 lat (msec): min=2121, max=10319, avg=7493.61, stdev=2819.69
00:28:06.591 clat percentiles (msec):
00:28:06.591 | 1.00th=[ 66], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329],
00:28:06.591 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8658],
00:28:06.591 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:28:06.591 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:28:06.591 | 99.99th=[10268]
00:28:06.591 lat (msec) : 100=2.22%, >=2000=97.78%
00:28:06.591 cpu : usr=0.00%, sys=0.45%, ctx=58, majf=0, minf=11521
00:28:06.591 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0%
00:28:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.591 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:28:06.591 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.591 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.591 job0: (groupid=0, jobs=1): err= 0: pid=3652622: Sat Jul 13 21:14:55 2024
00:28:06.591 read: IOPS=6, BW=6170KiB/s (6318kB/s)(62.0MiB/10290msec)
00:28:06.591 slat (usec): min=535, max=2116.5k, avg=165181.58, stdev=531814.64
00:28:06.591 clat (msec): min=48, max=10266, avg=7153.57, stdev=1904.58
00:28:06.591 lat (msec): min=2148, max=10289, avg=7318.75, stdev=1712.68
00:28:06.591 clat percentiles (msec):
00:28:06.591 | 1.00th=[ 48], 5.00th=[ 4329], 10.00th=[ 6141], 20.00th=[ 6208],
00:28:06.591 | 30.00th=[ 6275], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490],
00:28:06.591 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[ 8658], 95.00th=[10268],
00:28:06.591 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:28:06.591 | 99.99th=[10268]
00:28:06.591 lat (msec) : 50=1.61%, >=2000=98.39%
00:28:06.591 cpu : usr=0.00%, sys=0.40%, ctx=134, majf=0, minf=15873
00:28:06.591 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0%
00:28:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.591 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:28:06.591 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.591 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.591 job0: (groupid=0, jobs=1): err= 0: pid=3652623: Sat Jul 13 21:14:55 2024
00:28:06.591 read: IOPS=29, BW=29.4MiB/s (30.9MB/s)(303MiB/10294msec)
00:28:06.591 slat (usec): min=41, max=2099.2k, avg=33768.69, stdev=225859.02
00:28:06.591 clat (msec): min=59, max=7169, avg=3364.02, stdev=2883.79
00:28:06.591 lat (msec): min=720, max=7169, avg=3397.79, stdev=2879.14
00:28:06.591 clat percentiles (msec):
00:28:06.591 | 1.00th=[ 718], 5.00th=[ 726], 10.00th=[ 726], 20.00th=[ 793],
00:28:06.591 | 30.00th=[ 810], 40.00th=[ 944], 50.00th=[ 1099], 60.00th=[ 6409],
00:28:06.591 | 70.00th=[ 6611], 80.00th=[ 6812], 90.00th=[ 7013], 95.00th=[ 7080],
00:28:06.591 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7148], 99.95th=[ 7148],
00:28:06.591 | 99.99th=[ 7148]
00:28:06.591 bw ( KiB/s): min= 4096, max=188416, per=1.79%, avg=71680.00, stdev=81329.07, samples=5
00:28:06.591 iops : min= 4, max= 184, avg=70.00, stdev=79.42, samples=5
00:28:06.591 lat (msec) : 100=0.33%, 750=17.16%, 1000=24.42%, 2000=13.53%, >=2000=44.55%
00:28:06.591 cpu : usr=0.02%, sys=0.89%, ctx=358, majf=0, minf=32769
00:28:06.591 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.2%
00:28:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.591 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:28:06.591 issued rwts: total=303,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.591 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.591 job0: (groupid=0, jobs=1): err= 0: pid=3652624: Sat Jul 13 21:14:55 2024
00:28:06.591 read: IOPS=15, BW=15.8MiB/s (16.6MB/s)(165MiB/10448msec)
00:28:06.591 slat (usec): min=618, max=2125.4k, avg=62947.95, stdev=317592.27
00:28:06.591 clat (msec): min=59, max=10386, avg=7218.18, stdev=2671.34
00:28:06.591 lat (msec): min=2142, max=10387, avg=7281.12, stdev=2622.41
00:28:06.591 clat percentiles (msec):
00:28:06.591 | 1.00th=[ 2140], 5.00th=[ 2534], 10.00th=[ 2601], 20.00th=[ 2769],
00:28:06.591 | 30.00th=[ 7752], 40.00th=[ 7886], 50.00th=[ 8020], 60.00th=[ 8221],
00:28:06.591 | 70.00th=[ 8356], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10402],
00:28:06.591 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:28:06.591 | 99.99th=[10402]
00:28:06.591 bw ( KiB/s): min= 4096, max=59273, per=0.38%, avg=15131.40, stdev=24675.90, samples=5
00:28:06.591 iops : min= 4, max= 57, avg=14.60, stdev=23.70, samples=5
00:28:06.591 lat (msec) : 100=0.61%, >=2000=99.39%
00:28:06.591 cpu : usr=0.00%, sys=1.16%, ctx=256, majf=0, minf=32769
00:28:06.591 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.7%, 32=19.4%, >=64=61.8%
00:28:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.591 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6%
00:28:06.591 issued rwts: total=165,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.592 job0: (groupid=0, jobs=1): err= 0: pid=3652625: Sat Jul 13 21:14:55 2024
00:28:06.592 read: IOPS=9, BW=9980KiB/s (10.2MB/s)(101MiB/10363msec)
00:28:06.592 slat (usec): min=526, max=2141.4k, avg=102145.22, stdev=390148.12
00:28:06.592 clat (msec): min=45, max=10358, avg=3402.25, stdev=3646.73
00:28:06.592 lat (msec): min=489, max=10362, avg=3504.40, stdev=3695.92
00:28:06.592 clat percentiles (msec):
00:28:06.592 | 1.00th=[ 489], 5.00th=[ 642], 10.00th=[ 827], 20.00th=[ 1020],
00:28:06.592 | 30.00th=[ 1183], 40.00th=[ 1469], 50.00th=[ 1720], 60.00th=[ 1871],
00:28:06.592 | 70.00th=[ 2089], 80.00th=[ 8557], 90.00th=[10268], 95.00th=[10402],
00:28:06.592 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:28:06.592 | 99.99th=[10402]
00:28:06.592 lat (msec) : 50=0.99%, 500=0.99%, 750=5.94%, 1000=9.90%, 2000=46.53%
00:28:06.592 lat (msec) : >=2000=35.64%
00:28:06.592 cpu : usr=0.01%, sys=0.71%, ctx=338, majf=0, minf=25857
00:28:06.592 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=7.9%, 16=15.8%, 32=31.7%, >=64=37.6%
00:28:06.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.592 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:28:06.592 issued rwts: total=101,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.592 job0: (groupid=0, jobs=1): err= 0: pid=3652626: Sat Jul 13 21:14:55 2024
00:28:06.592 read: IOPS=61, BW=61.4MiB/s (64.4MB/s)(633MiB/10311msec)
00:28:06.592 slat (usec): min=42, max=2142.4k, avg=15801.24, stdev=156106.74
00:28:06.592 clat (msec): min=306, max=8489, avg=578.71, stdev=1129.48
00:28:06.592 lat (msec): min=310, max=8520, avg=594.51, stdev=1177.59
00:28:06.592 clat percentiles (msec):
00:28:06.592 | 1.00th=[ 317], 5.00th=[ 368], 10.00th=[ 368], 20.00th=[ 368],
00:28:06.592 | 30.00th=[ 368], 40.00th=[ 372], 50.00th=[ 372], 60.00th=[ 372],
00:28:06.592 | 70.00th=[ 376], 80.00th=[ 401], 90.00th=[ 542], 95.00th=[ 634],
00:28:06.592 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8490], 99.95th=[ 8490],
00:28:06.592 | 99.99th=[ 8490]
00:28:06.592 bw ( KiB/s): min=342016, max=348160, per=8.63%, avg=345429.33, stdev=3128.37, samples=3
00:28:06.592 iops : min= 334, max= 340, avg=337.33, stdev= 3.06, samples=3
00:28:06.592 lat (msec) : 500=88.31%, 750=8.85%, >=2000=2.84%
00:28:06.592 cpu : usr=0.00%, sys=1.12%, ctx=613, majf=0, minf=32769
00:28:06.592 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0%
00:28:06.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.592 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:06.592 issued rwts: total=633,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.592 job0: (groupid=0, jobs=1): err= 0: pid=3652627: Sat Jul 13 21:14:55 2024
00:28:06.592 read: IOPS=1, BW=1993KiB/s (2041kB/s)(20.0MiB/10275msec)
00:28:06.592 slat (usec): min=945, max=2091.4k, avg=510737.69, stdev=874737.98
00:28:06.592 clat (msec): min=59, max=10193, avg=5006.96, stdev=3026.76
00:28:06.592 lat (msec): min=2121, max=10274, avg=5517.69, stdev=3009.82
00:28:06.592 clat percentiles (msec):
00:28:06.592 | 1.00th=[ 60], 5.00th=[ 60], 10.00th=[ 2123], 20.00th=[ 2165],
00:28:06.592 | 30.00th=[ 2198], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 4329],
00:28:06.592 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[10134],
00:28:06.592 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:28:06.592 | 99.99th=[10134]
00:28:06.592 lat (msec) : 100=5.00%, >=2000=95.00%
00:28:06.592 cpu : usr=0.00%, sys=0.17%, ctx=56, majf=0, minf=5121
00:28:06.592 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0%
00:28:06.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.592 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:28:06.592 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.592 job1: (groupid=0, jobs=1): err= 0: pid=3652646: Sat Jul 13 21:14:55 2024
00:28:06.592 read: IOPS=124, BW=125MiB/s (131MB/s)(1296MiB/10400msec)
00:28:06.592 slat (usec): min=43, max=2081.1k, avg=7977.04, stdev=88483.65
00:28:06.592 clat (msec): min=51, max=6644, avg=978.21, stdev=1737.36
00:28:06.592 lat (msec): min=258, max=6646, avg=986.19, stdev=1743.53
00:28:06.592 clat percentiles (msec):
00:28:06.592 | 1.00th=[ 259], 5.00th=[ 259], 10.00th=[ 262], 20.00th=[ 262],
00:28:06.592 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 268], 60.00th=[ 275],
00:28:06.592 | 70.00th=[ 659], 80.00th=[ 793], 90.00th=[ 2366], 95.00th=[ 6477],
00:28:06.592 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6678],
00:28:06.592 | 99.99th=[ 6678]
00:28:06.592 bw ( KiB/s): min= 8192, max=494626, per=5.43%, avg=217290.36, stdev=205240.90, samples=11
00:28:06.592 iops : min= 8, max= 483, avg=212.09, stdev=200.48, samples=11
00:28:06.592 lat (msec) : 100=0.08%, 500=66.28%, 750=6.02%, 1000=16.20%, 2000=0.77%
00:28:06.592 lat (msec) : >=2000=10.65%
00:28:06.592 cpu : usr=0.07%, sys=2.49%, ctx=1236, majf=0, minf=32769
00:28:06.592 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1%
00:28:06.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.592 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:06.592 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.592 job1: (groupid=0, jobs=1): err= 0: pid=3652647: Sat Jul 13 21:14:55 2024
00:28:06.592 read: IOPS=2, BW=2787KiB/s (2854kB/s)(28.0MiB/10287msec)
00:28:06.592 slat (usec): min=688, max=2139.2k, avg=364222.44, stdev=765900.53
00:28:06.592 clat (msec): min=88, max=10285, avg=6418.83, stdev=3238.89
00:28:06.592 lat (msec): min=2033, max=10286, avg=6783.05, stdev=3069.61
00:28:06.592 clat percentiles (msec):
00:28:06.592 | 1.00th=[ 89], 5.00th=[ 2039], 10.00th=[ 2106], 20.00th=[ 4279],
00:28:06.592 | 30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477],
00:28:06.592 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134],
00:28:06.592 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:28:06.592 | 99.99th=[10268]
00:28:06.592 lat (msec) : 100=3.57%, >=2000=96.43%
00:28:06.592 cpu : usr=0.00%, sys=0.17%, ctx=76, majf=0, minf=7169
00:28:06.592 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0%
00:28:06.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.592 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:28:06.592 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.592 job1: (groupid=0, jobs=1): err= 0: pid=3652648: Sat Jul 13 21:14:55 2024
00:28:06.592 read: IOPS=6, BW=6569KiB/s (6727kB/s)(67.0MiB/10444msec)
00:28:06.592 slat (usec): min=686, max=2134.5k, avg=154555.65, stdev=529938.28
00:28:06.592 clat (msec): min=88, max=10442, avg=9104.91, stdev=2544.98
00:28:06.592 lat (msec): min=2126, max=10443, avg=9259.47, stdev=2290.86
00:28:06.592 clat percentiles (msec):
00:28:06.592 | 1.00th=[ 89], 5.00th=[ 2165], 10.00th=[ 6409], 20.00th=[ 8557],
00:28:06.592 | 30.00th=[10268], 40.00th=[10268], 50.00th=[10402], 60.00th=[10402],
00:28:06.592 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:28:06.592 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:28:06.592 | 99.99th=[10402]
00:28:06.592 lat (msec) : 100=1.49%, >=2000=98.51%
00:28:06.592 cpu : usr=0.00%, sys=0.66%, ctx=102, majf=0, minf=17153
00:28:06.592 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0%
00:28:06.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.592 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:28:06.592 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.592 job1: (groupid=0, jobs=1): err= 0: pid=3652649: Sat Jul 13 21:14:55 2024
00:28:06.592 read: IOPS=24, BW=24.3MiB/s (25.5MB/s)(251MiB/10322msec)
00:28:06.592 slat (usec): min=431, max=2054.4k, avg=40815.01, stdev=213779.00
00:28:06.592 clat (msec): min=76, max=8449, avg=4765.32, stdev=1446.13
00:28:06.592 lat (msec): min=1900, max=8525, avg=4806.14, stdev=1435.09
00:28:06.592 clat percentiles (msec):
00:28:06.592 | 1.00th=[ 1905], 5.00th=[ 2005], 10.00th=[ 3675], 20.00th=[ 3809],
00:28:06.592 | 30.00th=[ 3943], 40.00th=[ 4144], 50.00th=[ 5000], 60.00th=[ 5134],
00:28:06.592 | 70.00th=[ 5336], 80.00th=[ 5537], 90.00th=[ 6409], 95.00th=[ 8221],
00:28:06.592 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423],
00:28:06.592 | 99.99th=[ 8423]
00:28:06.592 bw ( KiB/s): min= 4096, max=126976, per=0.79%, avg=31484.38, stdev=40696.66, samples=8
00:28:06.592 iops : min= 4, max= 124, avg=30.63, stdev=39.80, samples=8
00:28:06.592 lat (msec) : 100=0.40%, 2000=4.38%, >=2000=95.22%
00:28:06.592 cpu : usr=0.00%, sys=0.98%, ctx=512, majf=0, minf=32769
00:28:06.592 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.7%, >=64=74.9%
00:28:06.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.592 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:28:06.592 issued rwts: total=251,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.592 job1: (groupid=0, jobs=1): err= 0: pid=3652650: Sat Jul 13 21:14:55 2024
00:28:06.592 read: IOPS=75, BW=75.1MiB/s (78.7MB/s)(754MiB/10043msec)
00:28:06.592 slat (usec): min=43, max=2117.7k, avg=13287.25, stdev=78317.41
00:28:06.592 clat (msec): min=19, max=3949, avg=1627.86, stdev=980.21
00:28:06.592 lat (msec): min=45, max=3962, avg=1641.15, stdev=982.37
00:28:06.592 clat percentiles (msec):
00:28:06.592 | 1.00th=[ 138], 5.00th=[ 776], 10.00th=[ 776], 20.00th=[ 785],
00:28:06.592 | 30.00th=[ 818], 40.00th=[ 995], 50.00th=[ 1452], 60.00th=[ 1720],
00:28:06.592 | 70.00th=[ 1905], 80.00th=[ 2198], 90.00th=[ 3406], 95.00th=[ 3708],
00:28:06.592 | 99.00th=[ 3910], 99.50th=[ 3910], 99.90th=[ 3943], 99.95th=[ 3943],
00:28:06.592 | 99.99th=[ 3943]
00:28:06.592 bw ( KiB/s): min=26624, max=174080, per=2.00%, avg=80180.75, stdev=50050.80, samples=16
00:28:06.592 iops : min= 26, max= 170, avg=78.19, stdev=48.93, samples=16
00:28:06.592 lat (msec) : 20=0.13%, 50=0.27%, 100=0.40%, 250=0.93%, 500=1.06%
00:28:06.592 lat (msec) : 750=1.19%, 1000=36.47%, 2000=34.88%, >=2000=24.67%
00:28:06.592 cpu : usr=0.03%, sys=1.46%, ctx=1149, majf=0, minf=32769
00:28:06.592 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.6%
00:28:06.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.592 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:06.592 issued rwts: total=754,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.592 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.592 job1: (groupid=0, jobs=1): err= 0: pid=3652651: Sat Jul 13 21:14:55 2024
00:28:06.592 read: IOPS=5, BW=5217KiB/s (5343kB/s)(53.0MiB/10402msec)
00:28:06.592 slat (usec): min=881, max=2106.5k, avg=194917.48, stdev=582409.79
00:28:06.592 clat (msec): min=70, max=10396, avg=8273.82, stdev=2993.39
00:28:06.592 lat (msec): min=2121, max=10401, avg=8468.74, stdev=2777.50
00:28:06.592 clat percentiles (msec):
00:28:06.592 | 1.00th=[ 71], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 6477],
00:28:06.592 | 30.00th=[ 8557], 40.00th=[ 8658], 50.00th=[10268], 60.00th=[10268],
00:28:06.593 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:28:06.593 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:28:06.593 | 99.99th=[10402]
00:28:06.593 lat (msec) : 100=1.89%, >=2000=98.11%
00:28:06.593 cpu : usr=0.00%, sys=0.55%, ctx=95, majf=0, minf=13569
00:28:06.593 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0%
00:28:06.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.593 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:28:06.593 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.593 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.593 job1: (groupid=0, jobs=1): err= 0: pid=3652652: Sat Jul 13 21:14:55 2024
00:28:06.593 read: IOPS=12, BW=12.3MiB/s (12.9MB/s)(126MiB/10245msec)
00:28:06.593 slat (usec): min=602, max=2076.9k, avg=79372.46, stdev=344468.64
00:28:06.593 clat (msec): min=243, max=10238, avg=1745.38, stdev=1928.61
00:28:06.593 lat (msec): min=245, max=10244, avg=1824.75, stdev=2067.11
00:28:06.593 clat percentiles (msec):
00:28:06.593 | 1.00th=[ 245], 5.00th=[ 292], 10.00th=[ 372], 20.00th=[ 567],
00:28:06.593 | 30.00th=[ 776], 40.00th=[ 1020], 50.00th=[ 1217], 60.00th=[ 1435],
00:28:06.593 | 70.00th=[ 1670], 80.00th=[ 1888], 90.00th=[ 4279], 95.00th=[ 6477],
00:28:06.593 | 99.00th=[ 8658], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:28:06.593 | 99.99th=[10268]
00:28:06.593 lat (msec) : 250=2.38%, 500=14.29%, 750=10.32%, 1000=12.70%, 2000=42.06%
00:28:06.593 lat (msec) : >=2000=18.25%
00:28:06.593 cpu : usr=0.01%, sys=0.61%, ctx=335, majf=0, minf=32257
00:28:06.593 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.3%, 16=12.7%, 32=25.4%, >=64=50.0%
00:28:06.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.593 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:28:06.593 issued rwts: total=126,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.593 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.593 job1: (groupid=0, jobs=1): err= 0: pid=3652653: Sat Jul 13 21:14:55 2024
00:28:06.593 read: IOPS=19, BW=19.7MiB/s (20.6MB/s)(205MiB/10428msec)
00:28:06.593 slat (usec): min=735, max=2088.1k, avg=50498.99, stdev=267829.48
00:28:06.593 clat (msec): min=74, max=8572, avg=4589.15, stdev=1431.79
00:28:06.593 lat (msec): min=1677, max=8609, avg=4639.65, stdev=1418.13
00:28:06.593 clat percentiles (msec):
00:28:06.593 | 1.00th=[ 1670], 5.00th=[ 1720], 10.00th=[ 1804], 20.00th=[ 3876],
00:28:06.593 | 30.00th=[ 4463], 40.00th=[ 4732], 50.00th=[ 5067], 60.00th=[ 5269],
00:28:06.593 | 70.00th=[ 5470], 80.00th=[ 5604], 90.00th=[ 5671], 95.00th=[ 5805],
00:28:06.593 | 99.00th=[ 6477], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557],
00:28:06.593 | 99.99th=[ 8557]
00:28:06.593 bw ( KiB/s): min=12312, max=75776, per=0.98%, avg=39409.50, stdev=26921.16, samples=4
00:28:06.593 iops : min= 12, max= 74, avg=38.25, stdev=26.29, samples=4
00:28:06.593 lat (msec) : 100=0.49%, 2000=13.66%, >=2000=85.85%
00:28:06.593 cpu : usr=0.00%, sys=0.97%, ctx=402, majf=0, minf=32769
00:28:06.593 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.6%, >=64=69.3%
00:28:06.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.593 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3%
00:28:06.593 issued rwts: total=205,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.593 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.593 job1: (groupid=0, jobs=1): err= 0: pid=3652654: Sat Jul 13 21:14:55 2024
00:28:06.593 read: IOPS=4, BW=4571KiB/s (4681kB/s)(46.0MiB/10304msec)
00:28:06.593 slat (usec): min=807, max=2092.7k, avg=222788.92, stdev=621086.96
00:28:06.593 clat (msec): min=54, max=10302, avg=8066.35, stdev=2827.86
00:28:06.593 lat (msec): min=2112, max=10303, avg=8289.13, stdev=2575.05
00:28:06.593 clat percentiles (msec):
00:28:06.593 | 1.00th=[ 55], 5.00th=[ 2123], 10.00th=[ 4245], 20.00th=[ 6409],
00:28:06.593 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10134],
00:28:06.593 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:28:06.593 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:28:06.593 | 99.99th=[10268]
00:28:06.593 lat (msec) : 100=2.17%, >=2000=97.83%
00:28:06.593 cpu : usr=0.00%, sys=0.38%, ctx=73, majf=0, minf=11777
00:28:06.593 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0%
00:28:06.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.593 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:28:06.593 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.593 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.593 job1: (groupid=0, jobs=1): err= 0: pid=3652655: Sat Jul 13 21:14:55 2024
00:28:06.593 read: IOPS=4, BW=4272KiB/s (4375kB/s)(43.0MiB/10306msec)
00:28:06.593 slat (usec): min=899, max=2095.9k, avg=238213.56, stdev=640332.73
00:28:06.593 clat (msec): min=61, max=10223, avg=5041.73, stdev=2828.17
00:28:06.593 lat (msec): min=2107, max=10305, avg=5279.94, stdev=2830.11
00:28:06.593 clat percentiles (msec):
00:28:06.593 | 1.00th=[ 62], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2165],
00:28:06.593 | 30.00th=[ 2198], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 4329],
00:28:06.593 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[10268],
00:28:06.593 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:28:06.593 | 99.99th=[10268]
00:28:06.593 lat (msec) : 100=2.33%, >=2000=97.67%
00:28:06.593 cpu : usr=0.00%, sys=0.38%, ctx=65, majf=0, minf=11009
00:28:06.593 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0%
00:28:06.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.593 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:28:06.593 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.593 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.593 job1: (groupid=0, jobs=1): err= 0: pid=3652657: Sat Jul 13 21:14:55 2024
00:28:06.593 read: IOPS=47, BW=47.5MiB/s (49.8MB/s)(489MiB/10291msec)
00:28:06.593 slat (usec): min=53, max=2074.9k, avg=20457.18, stdev=97240.68
00:28:06.593 clat (msec): min=283, max=4895, avg=2326.32, stdev=1258.67
00:28:06.593 lat (msec): min=293, max=4898, avg=2346.78, stdev=1258.58
00:28:06.593 clat percentiles (msec):
00:28:06.593 | 1.00th=[ 355], 5.00th=[ 667], 10.00th=[ 1003], 20.00th=[ 1401],
00:28:06.593 | 30.00th=[ 1670], 40.00th=[ 1821], 50.00th=[ 1955], 60.00th=[ 2039],
00:28:06.593 | 70.00th=[ 2265], 80.00th=[ 3876], 90.00th=[ 4530], 95.00th=[ 4799],
00:28:06.593 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866],
00:28:06.593 | 99.99th=[ 4866]
00:28:06.593 bw ( KiB/s): min= 2048, max=135168, per=1.42%, avg=56972.85, stdev=38841.32, samples=13
00:28:06.593 iops : min= 2, max= 132, avg=55.62, stdev=37.93, samples=13
00:28:06.593 lat (msec) : 500=3.07%, 750=2.66%, 1000=3.27%, 2000=45.40%, >=2000=45.60%
00:28:06.593 cpu : usr=0.05%, sys=1.23%, ctx=1065, majf=0, minf=32769
00:28:06.593 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.1%
00:28:06.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.593 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:06.593 issued rwts: total=489,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.593 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.593 job1: (groupid=0, jobs=1): err= 0: pid=3652658: Sat Jul 13 21:14:55 2024
00:28:06.593 read: IOPS=1, BW=1995KiB/s (2043kB/s)(20.0MiB/10266msec)
00:28:06.593 slat (msec): min=2, max=2092, avg=510.56, stdev=879.66
00:28:06.593 clat (msec): min=54, max=8618, avg=5804.09, stdev=2703.65
00:28:06.593 lat (msec): min=2118, max=10265, avg=6314.65, stdev=2518.61
00:28:06.593 clat percentiles (msec):
00:28:06.593 | 1.00th=[ 55], 5.00th=[ 55], 10.00th=[ 2123], 20.00th=[ 2198],
00:28:06.593 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477],
00:28:06.593 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[ 8658], 95.00th=[ 8658],
00:28:06.593 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658],
00:28:06.593 | 99.99th=[ 8658]
00:28:06.593 lat (msec) : 100=5.00%, >=2000=95.00%
00:28:06.593 cpu : usr=0.00%, sys=0.14%, ctx=60, majf=0, minf=5121
00:28:06.593 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0%
00:28:06.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.593 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:28:06.593 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.593 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.593 job1: (groupid=0, jobs=1): err= 0: pid=3652659: Sat Jul 13 21:14:55 2024
00:28:06.593 read: IOPS=51, BW=51.1MiB/s (53.6MB/s)(526MiB/10289msec)
00:28:06.593 slat (usec): min=42, max=2145.5k, avg=19411.11, stdev=141404.55
00:28:06.593 clat (msec): min=75, max=5996, avg=2118.85, stdev=2118.27
00:28:06.593 lat (msec): min=365, max=5998, avg=2138.26, stdev=2120.18
00:28:06.593 clat percentiles (msec):
00:28:06.593 | 1.00th=[ 368], 5.00th=[ 368], 10.00th=[ 368], 20.00th=[ 397],
00:28:06.593 | 30.00th=[ 531], 40.00th=[ 885], 50.00th=[ 1250], 60.00th=[ 1452],
00:28:06.593 | 70.00th=[ 1972], 80.00th=[ 5671], 90.00th=[ 5805], 95.00th=[ 5940],
00:28:06.593 | 99.00th=[ 5940], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007],
00:28:06.593 | 99.99th=[ 6007]
00:28:06.593 bw ( KiB/s): min= 2048, max=342016, per=2.54%, avg=101888.00, stdev=114336.11, samples=8
00:28:06.593 iops : min= 2, max= 334, avg=99.50, stdev=111.66, samples=8
00:28:06.593 lat (msec) : 100=0.19%, 500=26.81%, 750=9.89%, 1000=5.51%, 2000=27.76%
00:28:06.593 lat (msec) : >=2000=29.85%
00:28:06.593 cpu : usr=0.11%, sys=1.05%, ctx=845, majf=0, minf=32769
00:28:06.593 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0%
00:28:06.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.593 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:06.593 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.593 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.593 job2: (groupid=0, jobs=1): err= 0: pid=3652668: Sat Jul 13 21:14:55 2024
00:28:06.593 read: IOPS=38, BW=38.6MiB/s (40.5MB/s)(389MiB/10065msec)
00:28:06.593 slat (usec): min=82, max=1574.9k, avg=25712.44, stdev=85221.91
00:28:06.593 clat (msec): min=60, max=6752, avg=2521.43, stdev=1069.16
00:28:06.593 lat (msec): min=74, max=6765, avg=2547.14, stdev=1080.07
00:28:06.593 clat percentiles (msec):
00:28:06.593 | 1.00th=[ 78], 5.00th=[ 338], 10.00th=[ 835], 20.00th=[ 1989],
00:28:06.593 | 30.00th=[ 2265], 40.00th=[ 2500], 50.00th=[ 2668], 60.00th=[ 2802],
00:28:06.593 | 70.00th=[ 3004], 80.00th=[ 3373], 90.00th=[ 3406], 95.00th=[ 3440],
00:28:06.593 | 99.00th=[ 6678], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745],
00:28:06.593 | 99.99th=[ 6745]
00:28:06.593 bw ( KiB/s): min= 4096, max=75776, per=0.96%, avg=38326.86, stdev=22238.69, samples=14
00:28:06.593 iops : min= 4, max= 74, avg=37.43, stdev=21.72, samples=14
00:28:06.593 lat (msec) : 100=2.31%, 250=1.54%, 500=3.08%, 750=2.31%, 1000=1.80%
00:28:06.593 lat (msec) : 2000=9.25%, >=2000=79.69%
00:28:06.593 cpu : usr=0.02%, sys=1.01%, ctx=1193, majf=0, minf=32769
00:28:06.593 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.2%, >=64=83.8%
00:28:06.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.593 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:28:06.593 issued rwts: total=389,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.593 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.594 job2: (groupid=0, jobs=1): err= 0: pid=3652669: Sat Jul 13 21:14:55 2024
00:28:06.594 read: IOPS=6, BW=6250KiB/s (6400kB/s)(63.0MiB/10322msec)
00:28:06.594 slat (usec): min=872, max=2053.7k, avg=162297.88, stdev=530974.21
00:28:06.594 clat (msec): min=96, max=10320, avg=6406.31, stdev=3173.08
00:28:06.594 lat (msec): min=2121, max=10321, avg=6568.61, stdev=3105.93
00:28:06.594 clat percentiles (msec):
00:28:06.594 | 1.00th=[ 97], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2232],
00:28:06.594 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 6477], 60.00th=[ 8557],
00:28:06.594 | 70.00th=[ 8658], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268],
00:28:06.594 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:28:06.594 | 99.99th=[10268]
00:28:06.594 lat (msec) : 100=1.59%, >=2000=98.41%
00:28:06.594 cpu : usr=0.00%, sys=0.56%, ctx=74, majf=0, minf=16129
00:28:06.594 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0%
00:28:06.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.594 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:28:06.594 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.594 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.594 job2: (groupid=0, jobs=1): err= 0: pid=3652670: Sat Jul 13 21:14:55 2024
00:28:06.594 read: IOPS=51, BW=51.3MiB/s (53.8MB/s)(515MiB/10046msec)
00:28:06.594 slat (usec): min=48, max=2016.1k, avg=19424.91, stdev=89591.69
00:28:06.594 clat (msec): min=39, max=4633, avg=2236.49, stdev=1019.09
00:28:06.594 lat (msec): min=48, max=4641, avg=2255.91, stdev=1021.29
00:28:06.594 clat percentiles (msec):
00:28:06.594 | 1.00th=[ 97], 5.00th=[ 481], 10.00th=[ 894], 20.00th=[ 1351],
00:28:06.594 | 30.00th=[ 1821], 40.00th=[ 2089], 50.00th=[ 2198], 60.00th=[ 2400],
00:28:06.594 | 70.00th=[ 2567], 80.00th=[ 3037], 90.00th=[ 3708], 95.00th=[ 4212],
00:28:06.594 | 99.00th=[ 4463], 99.50th=[ 4530], 99.90th=[ 4665], 99.95th=[ 4665],
00:28:06.594 | 99.99th=[ 4665]
00:28:06.594 bw ( KiB/s): min=22528, max=135168, per=1.53%, avg=61124.92, stdev=31248.46, samples=13
00:28:06.594 iops : min= 22, max= 132, avg=59.69, stdev=30.52, samples=13
00:28:06.594 lat (msec) : 50=0.39%, 100=0.78%, 250=1.75%, 500=2.14%, 750=2.33%
00:28:06.594 lat (msec) : 1000=4.85%, 2000=21.75%, >=2000=66.02%
00:28:06.594 cpu : usr=0.02%, sys=1.26%, ctx=1249, majf=0, minf=32769
00:28:06.594 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.8%
00:28:06.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.594 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:06.594 issued rwts: total=515,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.594 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.594 job2: (groupid=0, jobs=1): err= 0: pid=3652671: Sat Jul 13 21:14:55 2024
00:28:06.594 read: IOPS=6, BW=6377KiB/s (6530kB/s)(65.0MiB/10438msec)
00:28:06.594 slat (usec): min=903, max=2096.3k, avg=159138.17, stdev=531451.51
00:28:06.594 clat (msec): min=93, max=10436, avg=8657.27, stdev=2927.07
00:28:06.594 lat (msec): min=2138, max=10437, avg=8816.41, stdev=2728.64
00:28:06.594 clat percentiles (msec):
00:28:06.594 | 1.00th=[ 93], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6409],
00:28:06.594 | 30.00th=[ 8658], 40.00th=[10268], 50.00th=[10402], 60.00th=[10402],
00:28:06.594 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:28:06.594 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:28:06.594 | 99.99th=[10402]
00:28:06.594 lat (msec) : 100=1.54%, >=2000=98.46%
00:28:06.594 cpu : usr=0.00%, sys=0.67%, ctx=106, majf=0, minf=16641
00:28:06.594 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1%
00:28:06.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.594 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:28:06.594 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.594 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.594 job2: (groupid=0, jobs=1): err= 0: pid=3652673: Sat Jul 13 21:14:55 2024
00:28:06.594 read: IOPS=4, BW=4473KiB/s (4580kB/s)(45.0MiB/10302msec)
00:28:06.594 slat (usec): min=892, max=2088.1k, avg=227228.18, stdev=626358.33
00:28:06.594 clat (msec): min=75, max=10291, avg=5099.41, stdev=2839.68
00:28:06.594 lat (msec): min=2107, max=10301, avg=5326.64, stdev=2837.67
00:28:06.594 clat percentiles (msec):
00:28:06.594 | 1.00th=[ 77], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2165],
00:28:06.594 | 30.00th=[ 2198], 40.00th=[ 4245], 50.00th=[ 4329], 60.00th=[ 6409],
00:28:06.594 | 70.00th=[ 6477], 80.00th=[ 6477], 90.00th=[10268], 95.00th=[10268],
00:28:06.594 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:28:06.594 | 99.99th=[10268]
00:28:06.594 lat (msec) : 100=2.22%, >=2000=97.78%
00:28:06.594 cpu : usr=0.00%, sys=0.39%, ctx=78, majf=0, minf=11521
00:28:06.594 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0%
00:28:06.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.594 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:28:06.594 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.594 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.594 job2: (groupid=0, jobs=1): err= 0: pid=3652674: Sat Jul 13 21:14:55 2024
00:28:06.594 read: IOPS=35, BW=35.4MiB/s (37.1MB/s)(367MiB/10361msec)
00:28:06.594 slat (usec): min=108, max=1992.5k, avg=27311.38, stdev=146105.02
00:28:06.594 clat (msec): min=334, max=6444, avg=3393.80, stdev=1466.99
00:28:06.594 lat (msec): min=369, max=6444, avg=3421.11, stdev=1464.04
00:28:06.594 clat percentiles (msec):
00:28:06.594 | 1.00th=[ 388], 5.00th=[ 659], 10.00th=[ 927], 20.00th=[ 1586],
00:28:06.594 | 30.00th=[ 2265], 40.00th=[ 4077], 50.00th=[ 4245], 60.00th=[ 4329],
00:28:06.594 | 70.00th=[ 4396], 80.00th=[ 4396], 90.00th=[ 4463], 95.00th=[ 4597],
00:28:06.594 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477],
00:28:06.594 | 99.99th=[ 6477]
00:28:06.594 bw ( KiB/s): min=16384, max=129024, per=1.02%, avg=40802.25, stdev=34936.63, samples=12
00:28:06.594 iops : min= 16, max= 126, avg=39.83, stdev=34.12, samples=12
00:28:06.594 lat (msec) : 500=3.00%, 750=3.81%, 1000=3.54%, 2000=17.71%, >=2000=71.93%
00:28:06.594 cpu : usr=0.02%, sys=1.53%, ctx=872, majf=0, minf=32769
00:28:06.594 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.7%, >=64=82.8%
00:28:06.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.594 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:28:06.594 issued rwts: total=367,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.594 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.594 job2: (groupid=0, jobs=1): err= 0: pid=3652675: Sat Jul 13 21:14:55 2024
00:28:06.594 read: IOPS=26, BW=26.2MiB/s (27.5MB/s)(272MiB/10390msec)
00:28:06.594 slat (usec): min=580, max=2032.3k, avg=37854.21, stdev=211108.37
00:28:06.594 clat (msec): min=90, max=8522, avg=4493.82, stdev=1320.17
00:28:06.594 lat (msec): min=1612, max=8546, avg=4531.67, stdev=1311.99
00:28:06.594 clat percentiles (msec):
00:28:06.594 | 1.00th=[ 1620], 5.00th=[ 1720], 10.00th=[ 3742], 20.00th=[ 3775],
00:28:06.594 | 30.00th=[ 3809], 40.00th=[ 3842], 50.00th=[ 3876], 60.00th=[ 5067],
00:28:06.594 | 70.00th=[ 5403], 80.00th=[ 5738], 90.00th=[ 6074], 95.00th=[ 6342],
00:28:06.594 | 99.00th=[ 6477], 99.50th=[ 8423], 99.90th=[ 8490], 99.95th=[ 8490],
00:28:06.594 | 99.99th=[ 8490]
00:28:06.594 bw ( KiB/s): min= 8175, max=79872, per=0.92%, avg=36843.38, stdev=31324.89, samples=8
00:28:06.594 iops : min= 7, max= 78, avg=35.75, stdev=30.58, samples=8
00:28:06.594 lat (msec) : 100=0.37%, 2000=6.99%, >=2000=92.65%
00:28:06.594 cpu : usr=0.00%, sys=1.38%, ctx=527, majf=0, minf=32769
00:28:06.594 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.9%, 32=11.8%, >=64=76.8%
00:28:06.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.594 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:28:06.594 issued rwts: total=272,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.594 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.594 job2: (groupid=0, jobs=1): err= 0: pid=3652676: Sat Jul 13 21:14:55 2024
00:28:06.594 read: IOPS=3, BW=3551KiB/s (3636kB/s)(36.0MiB/10381msec)
00:28:06.594 slat (usec): min=843, max=2082.2k, avg=285726.72, stdev=685895.28
00:28:06.594 clat (msec): min=93, max=10378, avg=8054.29, stdev=2705.72
00:28:06.594 lat (msec): min=2174, max=10380, avg=8340.02, stdev=2362.39
00:28:06.594 clat percentiles (msec):
00:28:06.594 | 1.00th=[ 94], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6477],
00:28:06.594 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[ 8658],
00:28:06.594 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402],
00:28:06.594 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:28:06.594 | 99.99th=[10402]
00:28:06.594 lat (msec) : 100=2.78%, >=2000=97.22%
00:28:06.594 cpu : usr=0.00%, sys=0.31%, ctx=85, majf=0, minf=9217
00:28:06.594 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0%
00:28:06.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.594 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:28:06.594 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.594 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.594 job2: (groupid=0, jobs=1): err= 0: pid=3652677: Sat Jul 13 21:14:55 2024
00:28:06.594 read: IOPS=18, BW=18.5MiB/s (19.4MB/s)(187MiB/10095msec)
00:28:06.594 slat (usec): min=379, max=2074.7k, avg=53533.81, stdev=241495.57
00:28:06.594 clat (msec): min=82, max=9264, avg=3346.51, stdev=2754.88
00:28:06.594 lat (msec): min=131, max=9295, avg=3400.05, stdev=2782.97
00:28:06.594 clat percentiles (msec):
00:28:06.594 | 1.00th=[ 132], 5.00th=[ 241], 10.00th=[ 535], 20.00th=[ 1116],
00:28:06.594 | 30.00th=[ 1921], 40.00th=[ 2433], 50.00th=[ 2635], 60.00th=[ 3037],
00:28:06.594 | 70.00th=[ 3171], 80.00th=[ 5537], 90.00th=[ 8658], 95.00th=[ 8926],
00:28:06.594 | 99.00th=[ 9194], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329],
00:28:06.594 | 99.99th=[ 9329]
00:28:06.594 bw ( KiB/s): min=20480, max=53248, per=0.77%, avg=30720.00, stdev=15234.33, samples=4
00:28:06.594 iops : min= 20, max= 52, avg=30.00, stdev=14.88, samples=4
00:28:06.594 lat (msec) : 100=0.53%, 250=4.81%, 500=4.28%, 750=5.35%, 1000=3.74%
00:28:06.594 lat (msec) : 2000=13.37%, >=2000=67.91%
00:28:06.594 cpu : usr=0.00%, sys=1.02%, ctx=638, majf=0, minf=32769
00:28:06.594 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.3%, 16=8.6%, 32=17.1%, >=64=66.3%
00:28:06.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.594 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6%
00:28:06.594 issued rwts: total=187,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.594 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.594 job2: (groupid=0, jobs=1): err= 0: pid=3652678: Sat Jul 13 21:14:55 2024
00:28:06.594 read: IOPS=43, BW=43.2MiB/s (45.3MB/s)(434MiB/10047msec)
00:28:06.594 slat (usec): min=45, max=2122.9k, avg=23051.80, stdev=141720.73
00:28:06.594 clat (msec): min=39, max=7589, avg=2750.48, stdev=2017.53
00:28:06.594 lat (msec): min=47, max=7590, avg=2773.53, stdev=2024.27
00:28:06.594 clat percentiles (msec):
00:28:06.594 | 1.00th=[ 84], 5.00th=[ 584], 10.00th=[ 743], 20.00th=[ 760],
00:28:06.594 | 30.00th=[ 1062], 40.00th=[ 1905], 50.00th=[ 2165], 60.00th=[ 2500],
00:28:06.595 | 70.00th=[ 2903], 80.00th=[ 5403], 90.00th=[ 5738], 95.00th=[ 6007],
00:28:06.595 | 99.00th=[ 7617], 99.50th=[ 7617], 99.90th=[ 7617], 99.95th=[ 7617],
00:28:06.595 | 99.99th=[ 7617]
00:28:06.595 bw ( KiB/s): min= 6144, max=139264, per=1.30%, avg=52243.58, stdev=35438.79, samples=12
00:28:06.595 iops : min= 6, max= 136, avg=51.00, stdev=34.61, samples=12
00:28:06.595 lat (msec) : 50=0.46%, 100=0.92%, 250=1.15%, 500=2.30%, 750=5.53%
00:28:06.595 lat (msec) : 1000=18.20%, 2000=14.52%, >=2000=56.91%
00:28:06.595 cpu : usr=0.04%, sys=1.23%, ctx=855, majf=0, minf=32769
00:28:06.595 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5%
00:28:06.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.595 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:06.595 issued rwts: total=434,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.595 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.595 job2: (groupid=0, jobs=1): err= 0: pid=3652679: Sat Jul 13 21:14:55 2024
00:28:06.595 read: IOPS=94, BW=94.4MiB/s (98.9MB/s)(951MiB/10078msec)
00:28:06.595 slat (usec): min=39, max=2010.7k, avg=10507.24, stdev=90542.51
00:28:06.595 clat (msec): min=77, max=4863, avg=1304.40, stdev=1338.89
00:28:06.595 lat (msec): min=89, max=4870, avg=1314.90, stdev=1343.65
00:28:06.595 clat percentiles (msec):
00:28:06.595 | 1.00th=[ 117], 5.00th=[ 321], 10.00th=[ 609], 20.00th=[ 776],
00:28:06.595 | 30.00th=[ 802], 40.00th=[ 810], 50.00th=[ 852], 60.00th=[ 860],
00:28:06.595 | 70.00th=[ 869], 80.00th=[ 885], 90.00th=[ 4732], 95.00th=[ 4799],
00:28:06.595 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866],
00:28:06.595 | 99.99th=[ 4866]
00:28:06.595 bw ( KiB/s): min= 8192, max=188416, per=3.24%, avg=129836.00, stdev=56784.47, samples=13
00:28:06.595 iops : min= 8, max= 184, avg=126.77, stdev=55.44, samples=13
00:28:06.595 lat (msec) : 100=0.42%, 250=3.26%, 500=4.63%, 750=8.20%, 1000=68.87%
00:28:06.595 lat (msec) : >=2000=14.62%
00:28:06.595 cpu : usr=0.07%, sys=2.44%, ctx=859, majf=0, minf=32769
00:28:06.595 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4%
00:28:06.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.595 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:06.595 issued rwts: total=951,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.595 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.595 job2: (groupid=0, jobs=1): err= 0: pid=3652680: Sat Jul 13 21:14:55 2024
00:28:06.595 read: IOPS=38, BW=38.1MiB/s (39.9MB/s)(384MiB/10091msec)
00:28:06.595 slat (usec): min=440, max=1939.9k, avg=26073.33, stdev=112654.16
00:28:06.595 clat (msec): min=76, max=6414, avg=3102.45, stdev=1865.22
00:28:06.595 lat (msec): min=98, max=6425, avg=3128.52, stdev=1870.26
00:28:06.595 clat percentiles (msec):
00:28:06.595 | 1.00th=[ 136], 5.00th=[ 418], 10.00th=[ 944], 20.00th=[ 1670],
00:28:06.595 | 30.00th=[ 1838], 40.00th=[ 1955], 50.00th=[ 2366], 60.00th=[ 3239],
00:28:06.595 | 70.00th=[ 4799], 80.00th=[ 5403], 90.00th=[ 5805], 95.00th=[ 6074],
00:28:06.595 | 99.00th=[ 6342], 99.50th=[ 6409], 99.90th=[ 6409], 99.95th=[ 6409],
00:28:06.595 | 99.99th=[ 6409]
00:28:06.595 bw ( KiB/s): min=10240, max=79872, per=1.01%, avg=40329.85, stdev=20383.94, samples=13
00:28:06.595 iops : min= 10, max= 78, avg=39.38, stdev=19.91, samples=13
00:28:06.595 lat (msec) : 100=0.52%, 250=1.56%, 500=3.65%, 750=2.34%, 1000=2.86%
00:28:06.595 lat (msec) : 2000=31.77%, >=2000=57.29%
00:28:06.595 cpu : usr=0.02%, sys=1.29%, ctx=1075, majf=0, minf=32769
00:28:06.595 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6%
00:28:06.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.595 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:28:06.595 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.595 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.595 job2: (groupid=0, jobs=1): err= 0: pid=3652681: Sat Jul 13 21:14:55 2024
00:28:06.595 read: IOPS=32, BW=32.3MiB/s (33.8MB/s)(330MiB/10225msec)
00:28:06.595 slat (usec): min=448, max=2017.0k, avg=30304.14, stdev=156452.24
00:28:06.595 clat (msec): min=222, max=8121, avg=3214.09, stdev=2417.01
00:28:06.595 lat (msec): min=224, max=8122, avg=3244.40, stdev=2424.42
00:28:06.595 clat percentiles (msec):
00:28:06.595 | 1.00th=[ 230], 5.00th=[ 468], 10.00th=[ 726], 20.00th=[ 1284],
00:28:06.595 | 30.00th=[ 1770], 40.00th=[ 2089], 50.00th=[ 2198], 60.00th=[ 2232],
00:28:06.595 | 70.00th=[ 4463], 80.00th=[ 6678], 90.00th=[ 6946], 95.00th=[ 7013],
00:28:06.595 | 99.00th=[ 7080], 99.50th=[ 8087], 99.90th=[ 8154], 99.95th=[ 8154],
00:28:06.595 | 99.99th=[ 8154]
00:28:06.595 bw ( KiB/s): min=47104, max=69632, per=1.48%, avg=59226.29, stdev=7795.77, samples=7
00:28:06.595 iops : min= 46, max= 68, avg=57.57, stdev= 7.44, samples=7
00:28:06.595 lat (msec) : 250=1.82%, 500=4.55%, 750=4.24%, 1000=3.64%, 2000=20.91%
00:28:06.595 lat (msec) : >=2000=64.85%
00:28:06.595 cpu : usr=0.00%, sys=0.99%, ctx=819, majf=0, minf=32769
00:28:06.595 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.7%, >=64=80.9%
00:28:06.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.595 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:28:06.595 issued rwts: total=330,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.595 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.595 job3: (groupid=0, jobs=1): err= 0: pid=3652686: Sat Jul 13 21:14:55 2024
00:28:06.595 read: IOPS=56, BW=56.6MiB/s (59.4MB/s)(569MiB/10047msec)
00:28:06.595 slat (usec): min=34, max=1997.8k, avg=17571.55, stdev=84097.32
00:28:06.595 clat (msec): min=45, max=4082, avg=2021.11, stdev=1018.23
00:28:06.595 lat (msec): min=48, max=4101, avg=2038.68, stdev=1019.21
00:28:06.595 clat percentiles (msec):
00:28:06.595 | 1.00th=[ 65], 5.00th=[ 567], 10.00th=[ 911], 20.00th=[ 1301],
00:28:06.595 | 30.00th=[ 1536], 40.00th=[ 1737], 50.00th=[ 1871], 60.00th=[ 1921],
00:28:06.595 | 70.00th=[ 2039], 80.00th=[ 3272], 90.00th=[ 3809], 95.00th=[ 4010],
00:28:06.595 | 99.00th=[ 4077], 99.50th=[ 4077], 99.90th=[ 4077], 99.95th=[ 4077],
00:28:06.595 | 99.99th=[ 4077]
00:28:06.595 bw ( KiB/s): min= 2048, max=157696, per=1.51%, avg=60299.67, stdev=39459.59, samples=15
00:28:06.595 iops : min= 2, max= 154, avg=58.73, stdev=38.59, samples=15
00:28:06.595 lat (msec) : 50=0.35%, 100=1.58%, 250=1.23%, 500=1.23%, 750=2.28%
00:28:06.595 lat (msec) : 1000=4.92%, 2000=55.71%, >=2000=32.69%
00:28:06.595 cpu : usr=0.01%, sys=1.31%, ctx=2235, majf=0, minf=32769
00:28:06.595 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=88.9%
00:28:06.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.595 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:06.595 issued rwts: total=569,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.595 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.595 job3: (groupid=0, jobs=1): err= 0: pid=3652687: Sat Jul 13 21:14:55 2024
00:28:06.595 read: IOPS=103, BW=103MiB/s (109MB/s)(1041MiB/10058msec)
00:28:06.595 slat (usec): min=44, max=2048.5k, avg=9614.43, stdev=89880.25
00:28:06.595 clat (msec): min=43, max=5191, avg=1196.17, stdev=1467.55
00:28:06.595 lat (msec): min=67, max=5194, avg=1205.79, stdev=1472.16
00:28:06.595 clat percentiles (msec):
00:28:06.595 | 1.00th=[ 271], 5.00th=[ 384], 10.00th=[ 388], 20.00th=[ 393],
00:28:06.595 | 30.00th=[ 397], 40.00th=[ 414], 50.00th=[ 575], 60.00th=[ 693],
00:28:06.595 | 70.00th=[ 919], 80.00th=[ 1401], 90.00th=[ 5000], 95.00th=[ 5134],
00:28:06.595 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5201], 99.95th=[ 5201],
00:28:06.595 | 99.99th=[ 5201]
00:28:06.595 bw ( KiB/s): min= 6144, max=335872, per=3.34%, avg=133658.29, stdev=115642.59, samples=14
00:28:06.595 iops : min= 6, max= 328, avg=130.50, stdev=112.95, samples=14
00:28:06.595 lat (msec) : 50=0.10%, 100=0.19%, 250=0.48%, 500=44.57%, 750=19.60%
00:28:06.595 lat (msec) : 1000=6.72%, 2000=15.18%, >=2000=13.16%
00:28:06.595 cpu : usr=0.05%, sys=1.79%, ctx=2257, majf=0, minf=32769
00:28:06.595 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9%
00:28:06.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.595 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:06.595 issued rwts: total=1041,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.595 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.595 job3: (groupid=0, jobs=1): err= 0: pid=3652688: Sat Jul 13 21:14:55 2024
00:28:06.595 read: IOPS=78, BW=78.2MiB/s (81.9MB/s)(787MiB/10070msec)
00:28:06.595 slat (usec): min=672, max=106013, avg=12734.14, stdev=13636.54
00:28:06.595 clat (msec): min=44, max=3204, avg=1524.43, stdev=617.32
00:28:06.595 lat (msec): min=81, max=3212, avg=1537.16, stdev=617.32
00:28:06.595 clat percentiles (msec):
00:28:06.595 | 1.00th=[ 222], 5.00th=[ 810], 10.00th=[ 911], 20.00th=[ 1083],
00:28:06.595 | 30.00th=[ 1167], 40.00th=[ 1267], 50.00th=[ 1368], 60.00th=[ 1485],
00:28:06.595 | 70.00th=[ 1720], 80.00th=[ 2039], 90.00th=[ 2467], 95.00th=[ 2903],
00:28:06.595 | 99.00th=[ 3171], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3205],
00:28:06.595 | 99.99th=[ 3205]
00:28:06.596 bw ( KiB/s): min=22528, max=182272, per=1.98%, avg=79366.71, stdev=43611.13, samples=17
00:28:06.596 iops : min= 22, max= 178, avg=77.41, stdev=42.54, samples=17
00:28:06.596 lat (msec) : 50=0.13%, 100=0.25%, 250=0.64%, 500=1.65%, 750=1.78%
00:28:06.596 lat (msec) : 1000=10.67%, 2000=62.90%, >=2000=21.98%
00:28:06.596 cpu : usr=0.02%, sys=1.99%, ctx=2768, majf=0, minf=32053
00:28:06.596 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0%
00:28:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.596 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:06.596 issued rwts: total=787,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.596 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.596 job3: (groupid=0, jobs=1): err= 0: pid=3652690: Sat Jul 13 21:14:55 2024
00:28:06.596 read: IOPS=8, BW=8677KiB/s (8885kB/s)(88.0MiB/10385msec)
00:28:06.596 slat (usec): min=732, max=2077.7k, avg=117051.26, stdev=455893.14
00:28:06.596 clat (msec): min=83, max=10381, avg=7268.59, stdev=3236.36
00:28:06.596 lat (msec): min=2111, max=10384, avg=7385.64, stdev=3158.86
00:28:06.596 clat percentiles (msec):
00:28:06.596 | 1.00th=[ 84], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4245],
00:28:06.596 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[10134],
00:28:06.596 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402],
00:28:06.596 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:28:06.596 | 99.99th=[10402]
00:28:06.596 lat (msec) : 100=1.14%, >=2000=98.86%
00:28:06.596 cpu : usr=0.00%, sys=0.90%, ctx=100, majf=0, minf=22529
00:28:06.596 IO depths : 1=1.1%, 2=2.3%, 4=4.5%, 8=9.1%, 16=18.2%, 32=36.4%, >=64=28.4%
00:28:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.596 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:28:06.596 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.596 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.596 job3: (groupid=0, jobs=1): err= 0: pid=3652691: Sat Jul 13 21:14:55 2024
00:28:06.596 read: IOPS=107, BW=107MiB/s (112MB/s)(1078MiB/10073msec)
00:28:06.596 slat (usec): min=43, max=181059, avg=9272.76, stdev=13392.25
00:28:06.596 clat (msec): min=70, max=3181, avg=1090.52, stdev=450.28
00:28:06.596 lat (msec): min=74, max=3282, avg=1099.79, stdev=452.62
00:28:06.596 clat percentiles (msec):
00:28:06.596 | 1.00th=[ 118], 5.00th=[ 275], 10.00th=[ 330], 20.00th=[ 818],
00:28:06.596 | 30.00th=[ 919], 40.00th=[ 1020], 50.00th=[ 1116], 60.00th=[ 1200],
00:28:06.596 | 70.00th=[ 1284], 80.00th=[ 1418], 90.00th=[ 1703], 95.00th=[ 1854],
00:28:06.596 | 99.00th=[ 1989], 99.50th=[ 2005], 99.90th=[ 3171], 99.95th=[ 3171],
00:28:06.596 | 99.99th=[ 3171]
00:28:06.596 bw ( KiB/s): min=16384, max=331776, per=2.70%, avg=108208.33, stdev=69072.99, samples=18
00:28:06.596 iops : min= 16, max= 324, avg=105.67, stdev=67.46, samples=18
00:28:06.596 lat (msec) : 100=0.83%, 250=2.04%, 500=10.39%, 750=6.22%, 1000=19.29%
00:28:06.596 lat (msec) : 2000=60.58%, >=2000=0.65%
00:28:06.596 cpu : usr=0.06%, sys=1.58%, ctx=2790, majf=0, minf=32769
00:28:06.596 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2%
00:28:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.596 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:06.596 issued rwts: total=1078,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.596 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.596 job3: (groupid=0, jobs=1): err= 0: pid=3652692: Sat Jul 13 21:14:55 2024
00:28:06.596 read: IOPS=69, BW=69.1MiB/s (72.5MB/s)(693MiB/10022msec)
00:28:06.596 slat (usec): min=43, max=2057.5k, avg=14425.45, stdev=123129.58
00:28:06.596 clat (msec): min=20, max=4691, avg=906.87, stdev=659.50
00:28:06.596 lat (msec): min=22, max=6295, avg=921.30, stdev=690.83
00:28:06.596 clat percentiles (msec):
00:28:06.596 | 1.00th=[ 30], 5.00th=[ 186], 10.00th=[ 388], 20.00th=[ 760],
00:28:06.596 | 30.00th=[ 768], 40.00th=[ 768], 50.00th=[ 802], 60.00th=[ 844],
00:28:06.596 | 70.00th=[ 877], 80.00th=[ 902], 90.00th=[ 978], 95.00th=[ 2601],
00:28:06.596 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665],
00:28:06.596 | 99.99th=[ 4665]
00:28:06.596 bw ( KiB/s): min=63488, max=179505, per=3.62%, avg=144806.12, stdev=38544.52, samples=8
00:28:06.596 iops : min= 62, max= 175, avg=141.38, stdev=37.60, samples=8
00:28:06.596 lat (msec) : 50=1.73%, 100=1.59%, 250=2.89%, 500=6.49%, 750=5.48%
00:28:06.596 lat (msec) : 1000=74.75%, >=2000=7.07%
00:28:06.596 cpu : usr=0.04%, sys=1.58%, ctx=607, majf=0, minf=32769
00:28:06.596 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9%
00:28:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.596 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:06.596 issued rwts: total=693,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.596 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.596 job3: (groupid=0, jobs=1): err= 0: pid=3652693: Sat Jul 13 21:14:55 2024
00:28:06.596 read: IOPS=47, BW=47.5MiB/s (49.8MB/s)(477MiB/10047msec)
00:28:06.596 slat (usec): min=47, max=1977.9k, avg=20963.48, stdev=91635.21
00:28:06.596 clat (msec): min=44, max=4608, avg=2423.10, stdev=1308.50
00:28:06.596 lat (msec): min=51, max=4639, avg=2444.06, stdev=1312.16
00:28:06.596 clat percentiles (msec):
00:28:06.596 | 1.00th=[ 77], 5.00th=[ 342], 10.00th=[ 642], 20.00th=[ 1552],
00:28:06.596 | 30.00th=[ 1921], 40.00th=[ 1989], 50.00th=[ 2165], 60.00th=[ 2232],
00:28:06.596 | 70.00th=[ 2299], 80.00th=[ 4329], 90.00th=[ 4463], 95.00th=[ 4530],
00:28:06.596 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597],
00:28:06.596 | 99.99th=[ 4597]
00:28:06.596 bw ( KiB/s): min= 6131, max=83968, per=1.37%, avg=55045.92, stdev=22666.24, samples=13
00:28:06.596 iops : min= 5, max= 82, avg=53.54, stdev=22.24, samples=13
00:28:06.596 lat (msec) : 50=0.21%, 100=1.26%, 250=2.31%, 500=3.98%, 750=3.35%
00:28:06.596 lat (msec) : 1000=1.89%, 2000=28.30%, >=2000=58.70%
00:28:06.596 cpu : usr=0.01%, sys=1.35%, ctx=1754, majf=0, minf=32769
00:28:06.596 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.7%, >=64=86.8%
00:28:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.596 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:06.596 issued rwts: total=477,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.596 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.596 job3: (groupid=0, jobs=1): err= 0: pid=3652694: Sat Jul 13 21:14:55 2024
00:28:06.596 read: IOPS=94, BW=94.2MiB/s (98.7MB/s)(949MiB/10077msec)
00:28:06.596 slat (usec): min=45, max=2013.7k, avg=10546.67, stdev=85092.25
00:28:06.596 clat (msec): min=61, max=6581, avg=1130.44, stdev=838.00
00:28:06.596 lat (msec): min=77, max=6588, avg=1140.99, stdev=849.07
00:28:06.596 clat percentiles (msec):
00:28:06.596 | 1.00th=[ 100], 5.00th=[ 296], 10.00th=[ 592], 20.00th=[ 726],
00:28:06.596 | 30.00th=[ 760], 40.00th=[ 844], 50.00th=[ 860], 60.00th=[ 877],
00:28:06.596 | 70.00th=[ 885], 80.00th=[ 919], 90.00th=[ 2802], 95.00th=[ 2836],
00:28:06.596 | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 6611], 99.95th=[ 6611],
00:28:06.596 | 99.99th=[ 6611]
00:28:06.596 bw ( KiB/s): min=12263, max=188416, per=3.50%, avg=140043.92, stdev=44286.60, samples=12
00:28:06.596 iops : min= 11, max= 184, avg=136.50, stdev=43.50, samples=12
00:28:06.596 lat (msec) : 100=1.05%, 250=2.95%, 500=4.43%, 750=19.60%, 1000=53.00%
00:28:06.596 lat (msec) : 2000=1.58%, >=2000=17.39%
00:28:06.596 cpu : usr=0.04%, sys=1.96%, ctx=950, majf=0, minf=32769
00:28:06.596 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4%
00:28:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.596 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:06.596 issued rwts: total=949,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.596 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.596 job3: (groupid=0, jobs=1): err= 0: pid=3652695: Sat Jul 13 21:14:55 2024
00:28:06.596 read: IOPS=50, BW=50.7MiB/s (53.1MB/s)(509MiB/10043msec)
00:28:06.596 slat (usec): min=36, max=1996.5k, avg=19648.19, stdev=89821.39
00:28:06.596 clat (msec): min=39, max=4816, avg=2273.19, stdev=1390.24
00:28:06.596 lat (msec): min=48, max=4834, avg=2292.84, stdev=1393.19
00:28:06.596 clat percentiles (msec):
00:28:06.596 | 1.00th=[ 74], 5.00th=[ 422], 10.00th=[ 978], 20.00th=[ 1368],
00:28:06.596 | 30.00th=[ 1469], 40.00th=[ 1703], 50.00th=[ 1804], 60.00th=[ 1938],
00:28:06.596 | 70.00th=[ 2056], 80.00th=[ 4396], 90.00th=[ 4665], 95.00th=[ 4732],
00:28:06.596 | 99.00th=[ 4732], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799],
00:28:06.596 | 99.99th=[ 4799]
00:28:06.596 bw ( KiB/s): min= 6144, max=149504, per=1.39%, avg=55686.50, stdev=33311.41, samples=14
00:28:06.596 iops : min= 6, max= 146, avg=54.36, stdev=32.52, samples=14
00:28:06.596 lat (msec) : 50=0.39%, 100=1.38%, 250=1.57%, 500=2.55%, 750=2.16%
00:28:06.596 lat (msec) : 1000=2.55%, 2000=56.19%, >=2000=33.20%
00:28:06.596 cpu : usr=0.08%, sys=1.04%, ctx=1915, majf=0, minf=32769
00:28:06.596 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6%
00:28:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.596 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:06.596 issued rwts: total=509,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.596 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.596 job3: (groupid=0, jobs=1): err= 0: pid=3652696: Sat Jul 13 21:14:55 2024
00:28:06.596 read: IOPS=114, BW=114MiB/s (120MB/s)(1146MiB/10052msec)
00:28:06.596 slat (usec): min=59, max=2071.4k, avg=8725.63, stdev=77833.12
00:28:06.596 clat (msec): min=46, max=5001, avg=799.26, stdev=964.06
00:28:06.596 lat (msec): min=62, max=5017, avg=807.98, stdev=972.74
00:28:06.596 clat percentiles (msec):
00:28:06.596 | 1.00th=[ 155], 5.00th=[ 180], 10.00th=[ 271], 20.00th=[ 326],
00:28:06.596 | 30.00th=[ 363], 40.00th=[ 380], 50.00th=[ 405], 60.00th=[ 510],
00:28:06.596 | 70.00th=[ 818], 80.00th=[ 953], 90.00th=[ 1636], 95.00th=[ 1938],
00:28:06.596 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000],
00:28:06.596 | 99.99th=[ 5000]
00:28:06.596 bw ( KiB/s): min=40960, max=454656, per=4.74%, avg=189693.18, stdev=147996.74, samples=11
00:28:06.596 iops : min= 40, max= 444, avg=185.18, stdev=144.55, samples=11
00:28:06.596 lat (msec) : 50=0.09%, 100=0.44%, 250=8.46%, 500=50.61%, 750=7.24%
00:28:06.596 lat (msec) : 1000=13.87%, 2000=14.83%, >=2000=4.45%
00:28:06.596 cpu : usr=0.07%, sys=1.94%, ctx=2792, majf=0, minf=32769
00:28:06.596 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5%
00:28:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.596 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:06.596 issued rwts: total=1146,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.596 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.596 job3: (groupid=0, jobs=1): err= 0: pid=3652697: Sat Jul 13 21:14:55 2024
00:28:06.596 read: IOPS=84, BW=84.3MiB/s (88.4MB/s)(847MiB/10045msec)
00:28:06.596 slat (usec): min=56, max=1998.3k, avg=11803.90, stdev=69969.54
00:28:06.596 clat (msec): min=41, max=6429, avg=1443.85, stdev=1099.44
00:28:06.596 lat (msec): min=45, max=6432, avg=1455.66, stdev=1102.03
00:28:06.596 clat percentiles (msec):
00:28:06.596 | 1.00th=[ 74], 5.00th=[ 347], 10.00th=[ 401], 20.00th=[ 701],
00:28:06.596 | 30.00th=[ 751], 40.00th=[ 793], 50.00th=[ 936], 60.00th=[ 1167],
00:28:06.597 | 70.00th=[ 1905], 80.00th=[ 2467], 90.00th=[ 3239], 95.00th=[ 3675],
00:28:06.597 | 99.00th=[ 4010], 99.50th=[ 4044], 99.90th=[ 6409], 99.95th=[ 6409],
00:28:06.597 | 99.99th=[ 6409]
00:28:06.597 bw ( KiB/s): min= 2048, max=274432, per=2.30%, avg=92101.12, stdev=75494.69, samples=16
00:28:06.597 iops : min= 2, max= 268, avg=89.87, stdev=73.74, samples=16
00:28:06.597 lat (msec) : 50=0.47%, 100=1.06%, 250=0.71%, 500=12.28%, 750=15.11%
00:28:06.597 lat (msec) : 1000=24.56%, 2000=16.77%, >=2000=29.04%
00:28:06.597 cpu : usr=0.00%, sys=1.72%, ctx=2392, majf=0, minf=32769
00:28:06.597 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6%
00:28:06.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.597 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:06.597 issued rwts: total=847,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.597 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.597 job3: (groupid=0, jobs=1): err= 0: pid=3652698: Sat Jul 13 21:14:55 2024
00:28:06.597 read: IOPS=57, BW=57.3MiB/s (60.1MB/s)(574MiB/10016msec)
00:28:06.597 slat (usec): min=56, max=2063.7k, avg=17418.07, stdev=110336.76
00:28:06.597 clat (msec): min=15, max=5123, avg=1245.77, stdev=616.24
00:28:06.597 lat (msec): min=16, max=5144, avg=1263.18, stdev=637.03
00:28:06.597 clat percentiles (msec):
00:28:06.597 | 1.00th=[ 31], 5.00th=[ 197], 10.00th=[ 506], 20.00th=[ 642],
00:28:06.597 | 30.00th=[ 927], 40.00th=[ 1150], 50.00th=[ 1284], 60.00th=[ 1418],
00:28:06.597 | 70.00th=[ 1569], 80.00th=[ 1737], 90.00th=[ 1938], 95.00th=[ 2123],
00:28:06.597 | 99.00th=[ 3473], 99.50th=[ 3507], 99.90th=[ 5134], 99.95th=[ 5134],
00:28:06.597 | 99.99th=[ 5134]
00:28:06.597 bw ( KiB/s): min=43008, max=212992, per=2.29%, avg=91545.60, stdev=53370.83, samples=10
00:28:06.597 iops : min= 42, max= 208, avg=89.40, stdev=52.12, samples=10
00:28:06.597 lat (msec) : 20=0.52%, 50=1.74%, 100=1.22%, 250=2.26%, 500=3.83%
00:28:06.597 lat (msec) : 750=13.59%, 1000=9.41%, 2000=58.89%, >=2000=8.54%
00:28:06.597 cpu : usr=0.00%, sys=1.17%, ctx=1736, majf=0, minf=32769
00:28:06.597 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0%
00:28:06.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:06.597 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:06.597 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:06.597 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:06.597 job3: (groupid=0, jobs=1): err= 0: pid=3652699: Sat Jul 13 21:14:55 2024
00:28:06.597 read: IOPS=57, BW=57.9MiB/s (60.7MB/s)(582MiB/10051msec)
00:28:06.597 slat (usec): min=108, max=1376.2k, avg=17185.24, stdev=59971.68
00:28:06.597 clat (msec): min=46, max=4195, avg=2013.18, stdev=886.91
00:28:06.597 lat (msec): min=52, max=4199, avg=2030.36, stdev=887.45
00:28:06.597 clat percentiles (msec):
00:28:06.597 | 1.00th=[ 91], 5.00th=[
443], 10.00th=[ 919], 20.00th=[ 1418], 00:28:06.597 | 30.00th=[ 1552], 40.00th=[ 1737], 50.00th=[ 1871], 60.00th=[ 2089], 00:28:06.597 | 70.00th=[ 2366], 80.00th=[ 2802], 90.00th=[ 3473], 95.00th=[ 3507], 00:28:06.597 | 99.00th=[ 3574], 99.50th=[ 3574], 99.90th=[ 4212], 99.95th=[ 4212], 00:28:06.597 | 99.99th=[ 4212] 00:28:06.597 bw ( KiB/s): min=22528, max=124928, per=1.66%, avg=66560.00, stdev=29160.15, samples=14 00:28:06.597 iops : min= 22, max= 122, avg=65.00, stdev=28.48, samples=14 00:28:06.597 lat (msec) : 50=0.17%, 100=0.86%, 250=1.37%, 500=3.61%, 750=2.75% 00:28:06.597 lat (msec) : 1000=2.06%, 2000=47.94%, >=2000=41.24% 00:28:06.597 cpu : usr=0.01%, sys=1.61%, ctx=1924, majf=0, minf=32769 00:28:06.597 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:28:06.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:06.597 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.597 job4: (groupid=0, jobs=1): err= 0: pid=3652716: Sat Jul 13 21:14:55 2024 00:28:06.597 read: IOPS=67, BW=67.5MiB/s (70.8MB/s)(703MiB/10407msec) 00:28:06.597 slat (usec): min=42, max=2079.9k, avg=14656.17, stdev=147169.57 00:28:06.597 clat (msec): min=100, max=8581, avg=1294.98, stdev=1862.45 00:28:06.597 lat (msec): min=236, max=8619, avg=1309.63, stdev=1879.25 00:28:06.597 clat percentiles (msec): 00:28:06.597 | 1.00th=[ 236], 5.00th=[ 239], 10.00th=[ 239], 20.00th=[ 239], 00:28:06.597 | 30.00th=[ 241], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 296], 00:28:06.597 | 70.00th=[ 527], 80.00th=[ 4530], 90.00th=[ 4732], 95.00th=[ 4799], 00:28:06.597 | 99.00th=[ 4866], 99.50th=[ 6477], 99.90th=[ 8557], 99.95th=[ 8557], 00:28:06.597 | 99.99th=[ 8557] 00:28:06.597 bw ( KiB/s): min= 6144, max=529373, per=5.88%, avg=235308.20, stdev=228416.95, samples=5 00:28:06.597 iops : min= 6, max= 516, avg=229.60, stdev=222.75, samples=5 00:28:06.597 lat (msec) : 250=54.34%, 500=15.08%, 750=7.68%, >=2000=22.90% 00:28:06.597 cpu : usr=0.04%, sys=1.26%, ctx=819, majf=0, minf=32769 00:28:06.597 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:28:06.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:06.597 issued rwts: total=703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.597 job4: (groupid=0, jobs=1): err= 0: pid=3652717: Sat Jul 13 21:14:55 2024 00:28:06.597 read: IOPS=177, BW=178MiB/s (186MB/s)(1780MiB/10015msec) 00:28:06.597 slat (usec): min=39, max=2088.6k, avg=5615.27, stdev=91987.56 00:28:06.597 clat (msec): min=14, max=8304, avg=194.10, stdev=627.29 00:28:06.597 lat (msec): min=15, max=8326, avg=199.71, stdev=656.23 00:28:06.597 clat percentiles (msec): 00:28:06.597 | 1.00th=[ 31], 5.00th=[ 103], 10.00th=[ 127], 20.00th=[ 128], 00:28:06.597 | 30.00th=[ 128], 40.00th=[ 128], 50.00th=[ 129], 60.00th=[ 129], 00:28:06.597 | 70.00th=[ 130], 80.00th=[ 130], 90.00th=[ 132], 95.00th=[ 134], 00:28:06.597 | 99.00th=[ 2333], 99.50th=[ 6611], 99.90th=[ 8288], 99.95th=[ 8288], 00:28:06.597 | 99.99th=[ 8288] 00:28:06.597 bw ( KiB/s): min=366592, max=1013760, per=21.13%, avg=846336.00, stdev=319883.24, samples=4 00:28:06.597 iops : min= 358, max= 990, avg=826.50, stdev=312.39, samples=4 00:28:06.597 lat 
(msec) : 20=0.34%, 50=1.69%, 100=2.81%, 250=93.60%, >=2000=1.57% 00:28:06.597 cpu : usr=0.03%, sys=1.79%, ctx=1704, majf=0, minf=32769 00:28:06.597 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:28:06.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.597 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.597 job4: (groupid=0, jobs=1): err= 0: pid=3652718: Sat Jul 13 21:14:55 2024 00:28:06.597 read: IOPS=5, BW=5209KiB/s (5334kB/s)(53.0MiB/10418msec) 00:28:06.597 slat (usec): min=606, max=2108.0k, avg=194919.81, stdev=587175.21 00:28:06.597 clat (msec): min=86, max=10412, avg=9064.54, stdev=2738.20 00:28:06.597 lat (msec): min=2126, max=10417, avg=9259.46, stdev=2438.04 00:28:06.597 clat percentiles (msec): 00:28:06.597 | 1.00th=[ 87], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 8557], 00:28:06.597 | 30.00th=[10268], 40.00th=[10268], 50.00th=[10402], 60.00th=[10402], 00:28:06.597 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:28:06.597 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:28:06.597 | 99.99th=[10402] 00:28:06.597 lat (msec) : 100=1.89%, >=2000=98.11% 00:28:06.597 cpu : usr=0.00%, sys=0.54%, ctx=118, majf=0, minf=13569 00:28:06.597 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0% 00:28:06.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:28:06.597 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.597 job4: (groupid=0, jobs=1): err= 0: pid=3652719: Sat Jul 13 21:14:55 2024 00:28:06.597 read: IOPS=15, BW=15.2MiB/s (15.9MB/s)(156MiB/10283msec) 00:28:06.597 slat (usec): min=412, max=2055.2k, avg=65268.30, stdev=308778.81 00:28:06.597 clat (msec): min=100, max=6484, avg=5268.41, stdev=998.92 00:28:06.597 lat (msec): min=2136, max=6497, avg=5333.68, stdev=903.95 00:28:06.597 clat percentiles (msec): 00:28:06.597 | 1.00th=[ 2140], 5.00th=[ 3742], 10.00th=[ 4279], 20.00th=[ 4866], 00:28:06.597 | 30.00th=[ 5067], 40.00th=[ 5269], 50.00th=[ 5403], 60.00th=[ 5604], 00:28:06.597 | 70.00th=[ 5873], 80.00th=[ 6007], 90.00th=[ 6275], 95.00th=[ 6409], 00:28:06.597 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:28:06.597 | 99.99th=[ 6477] 00:28:06.597 bw ( KiB/s): min=10240, max=34816, per=0.48%, avg=19114.67, stdev=13636.26, samples=3 00:28:06.597 iops : min= 10, max= 34, avg=18.67, stdev=13.32, samples=3 00:28:06.597 lat (msec) : 250=0.64%, >=2000=99.36% 00:28:06.597 cpu : usr=0.02%, sys=1.05%, ctx=259, majf=0, minf=32769 00:28:06.597 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.1%, 16=10.3%, 32=20.5%, >=64=59.6% 00:28:06.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.597 complete : 0=0.0%, 4=96.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.3% 00:28:06.597 issued rwts: total=156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.597 job4: (groupid=0, jobs=1): err= 0: pid=3652720: Sat Jul 13 21:14:55 2024 00:28:06.597 read: IOPS=73, BW=73.1MiB/s (76.6MB/s)(760MiB/10397msec) 00:28:06.597 slat (usec): min=41, max=2087.2k, avg=13541.19, stdev=97530.52 00:28:06.597 
clat (msec): min=98, max=5533, avg=1083.34, stdev=787.03 00:28:06.597 lat (msec): min=690, max=5540, avg=1096.88, stdev=806.20 00:28:06.597 clat percentiles (msec): 00:28:06.597 | 1.00th=[ 693], 5.00th=[ 718], 10.00th=[ 760], 20.00th=[ 768], 00:28:06.597 | 30.00th=[ 776], 40.00th=[ 810], 50.00th=[ 835], 60.00th=[ 902], 00:28:06.597 | 70.00th=[ 944], 80.00th=[ 1217], 90.00th=[ 1536], 95.00th=[ 1687], 00:28:06.597 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537], 00:28:06.597 | 99.99th=[ 5537] 00:28:06.598 bw ( KiB/s): min= 8192, max=178176, per=3.23%, avg=129433.60, stdev=63856.16, samples=10 00:28:06.598 iops : min= 8, max= 174, avg=126.40, stdev=62.36, samples=10 00:28:06.598 lat (msec) : 100=0.13%, 750=8.68%, 1000=64.61%, 2000=23.16%, >=2000=3.42% 00:28:06.598 cpu : usr=0.04%, sys=1.90%, ctx=791, majf=0, minf=32769 00:28:06.598 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7% 00:28:06.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.598 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:06.598 issued rwts: total=760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.598 job4: (groupid=0, jobs=1): err= 0: pid=3652722: Sat Jul 13 21:14:55 2024 00:28:06.598 read: IOPS=2, BW=2861KiB/s (2930kB/s)(29.0MiB/10378msec) 00:28:06.598 slat (usec): min=1171, max=2082.8k, avg=354403.88, stdev=755935.31 00:28:06.598 clat (msec): min=99, max=10375, avg=6878.00, stdev=2882.16 00:28:06.598 lat (msec): min=2131, max=10377, avg=7232.41, stdev=2640.71 00:28:06.598 clat percentiles (msec): 00:28:06.598 | 1.00th=[ 101], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329], 00:28:06.598 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8557], 00:28:06.598 | 70.00th=[ 8658], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402], 00:28:06.598 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:28:06.598 | 99.99th=[10402] 00:28:06.598 lat (msec) : 100=3.45%, >=2000=96.55% 00:28:06.598 cpu : usr=0.00%, sys=0.19%, ctx=107, majf=0, minf=7425 00:28:06.598 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:28:06.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.598 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:28:06.598 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.598 job4: (groupid=0, jobs=1): err= 0: pid=3652723: Sat Jul 13 21:14:55 2024 00:28:06.598 read: IOPS=35, BW=35.4MiB/s (37.1MB/s)(364MiB/10283msec) 00:28:06.598 slat (usec): min=54, max=2109.4k, avg=28009.22, stdev=210496.50 00:28:06.598 clat (msec): min=84, max=8944, avg=3374.42, stdev=3613.64 00:28:06.598 lat (msec): min=424, max=8944, avg=3402.43, stdev=3618.62 00:28:06.598 clat percentiles (msec): 00:28:06.598 | 1.00th=[ 426], 5.00th=[ 430], 10.00th=[ 443], 20.00th=[ 460], 00:28:06.598 | 30.00th=[ 489], 40.00th=[ 527], 50.00th=[ 684], 60.00th=[ 2567], 00:28:06.598 | 70.00th=[ 6477], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8926], 00:28:06.598 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:28:06.598 | 99.99th=[ 8926] 00:28:06.598 bw ( KiB/s): min= 4096, max=290816, per=2.01%, avg=80554.67, stdev=108553.02, samples=6 00:28:06.598 iops : min= 4, max= 284, avg=78.67, stdev=106.01, samples=6 00:28:06.598 lat (msec) : 100=0.27%, 500=32.14%, 750=21.15%, 
1000=3.02%, >=2000=43.41% 00:28:06.598 cpu : usr=0.07%, sys=1.33%, ctx=427, majf=0, minf=32769 00:28:06.598 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.7% 00:28:06.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.598 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:28:06.598 issued rwts: total=364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.598 job4: (groupid=0, jobs=1): err= 0: pid=3652724: Sat Jul 13 21:14:55 2024 00:28:06.598 read: IOPS=24, BW=25.0MiB/s (26.2MB/s)(258MiB/10327msec) 00:28:06.598 slat (usec): min=626, max=2091.4k, avg=39692.28, stdev=243704.55 00:28:06.598 clat (msec): min=84, max=10219, avg=4069.25, stdev=3250.40 00:28:06.598 lat (msec): min=744, max=10264, avg=4108.94, stdev=3256.52 00:28:06.598 clat percentiles (msec): 00:28:06.598 | 1.00th=[ 743], 5.00th=[ 751], 10.00th=[ 760], 20.00th=[ 768], 00:28:06.598 | 30.00th=[ 768], 40.00th=[ 768], 50.00th=[ 2567], 60.00th=[ 7080], 00:28:06.598 | 70.00th=[ 7282], 80.00th=[ 7416], 90.00th=[ 7617], 95.00th=[ 7684], 00:28:06.598 | 99.00th=[ 8658], 99.50th=[10134], 99.90th=[10268], 99.95th=[10268], 00:28:06.598 | 99.99th=[10268] 00:28:06.598 bw ( KiB/s): min= 2048, max=143360, per=1.11%, avg=44373.33, stdev=61626.31, samples=6 00:28:06.598 iops : min= 2, max= 140, avg=43.33, stdev=60.18, samples=6 00:28:06.598 lat (msec) : 100=0.39%, 750=3.49%, 1000=40.70%, 2000=1.94%, >=2000=53.49% 00:28:06.598 cpu : usr=0.02%, sys=1.28%, ctx=506, majf=0, minf=32769 00:28:06.598 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.2%, 32=12.4%, >=64=75.6% 00:28:06.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.598 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:28:06.598 issued rwts: total=258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.598 job4: (groupid=0, jobs=1): err= 0: pid=3652725: Sat Jul 13 21:14:55 2024 00:28:06.598 read: IOPS=14, BW=15.0MiB/s (15.7MB/s)(155MiB/10349msec) 00:28:06.598 slat (usec): min=452, max=2069.5k, avg=66217.49, stdev=326119.90 00:28:06.598 clat (msec): min=83, max=8598, avg=6017.62, stdev=1315.04 00:28:06.598 lat (msec): min=2153, max=8599, avg=6083.84, stdev=1249.26 00:28:06.598 clat percentiles (msec): 00:28:06.598 | 1.00th=[ 2165], 5.00th=[ 4463], 10.00th=[ 4463], 20.00th=[ 5738], 00:28:06.598 | 30.00th=[ 5873], 40.00th=[ 5940], 50.00th=[ 6074], 60.00th=[ 6141], 00:28:06.598 | 70.00th=[ 6208], 80.00th=[ 6342], 90.00th=[ 8490], 95.00th=[ 8557], 00:28:06.598 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:28:06.598 | 99.99th=[ 8658] 00:28:06.598 bw ( KiB/s): min= 4096, max=32768, per=0.35%, avg=13824.00, stdev=13100.26, samples=4 00:28:06.598 iops : min= 4, max= 32, avg=13.50, stdev=12.79, samples=4 00:28:06.598 lat (msec) : 100=0.65%, >=2000=99.35% 00:28:06.598 cpu : usr=0.01%, sys=0.86%, ctx=298, majf=0, minf=32769 00:28:06.598 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.3%, 32=20.6%, >=64=59.4% 00:28:06.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.598 complete : 0=0.0%, 4=96.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.4% 00:28:06.598 issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.598 job4: (groupid=0, jobs=1): err= 0: pid=3652726: Sat Jul 13 21:14:55 2024 
00:28:06.598 read: IOPS=12, BW=12.5MiB/s (13.1MB/s)(130MiB/10439msec) 00:28:06.598 slat (usec): min=616, max=2084.3k, avg=79533.83, stdev=366563.07 00:28:06.598 clat (msec): min=98, max=10429, avg=9382.44, stdev=1941.04 00:28:06.598 lat (msec): min=2145, max=10431, avg=9461.98, stdev=1761.12 00:28:06.598 clat percentiles (msec): 00:28:06.598 | 1.00th=[ 2140], 5.00th=[ 4329], 10.00th=[ 6477], 20.00th=[ 9731], 00:28:06.598 | 30.00th=[ 9731], 40.00th=[ 9866], 50.00th=[10000], 60.00th=[10134], 00:28:06.598 | 70.00th=[10134], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:28:06.598 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:28:06.598 | 99.99th=[10402] 00:28:06.598 bw ( KiB/s): min= 4096, max= 4096, per=0.10%, avg=4096.00, stdev= 0.00, samples=1 00:28:06.598 iops : min= 4, max= 4, avg= 4.00, stdev= 0.00, samples=1 00:28:06.598 lat (msec) : 100=0.77%, >=2000=99.23% 00:28:06.598 cpu : usr=0.00%, sys=1.09%, ctx=229, majf=0, minf=32769 00:28:06.598 IO depths : 1=0.8%, 2=1.5%, 4=3.1%, 8=6.2%, 16=12.3%, 32=24.6%, >=64=51.5% 00:28:06.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.598 complete : 0=0.0%, 4=75.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=25.0% 00:28:06.598 issued rwts: total=130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.598 job4: (groupid=0, jobs=1): err= 0: pid=3652727: Sat Jul 13 21:14:55 2024 00:28:06.598 read: IOPS=13, BW=13.7MiB/s (14.3MB/s)(142MiB/10378msec) 00:28:06.598 slat (usec): min=132, max=2077.3k, avg=72472.47, stdev=340263.19 00:28:06.598 clat (msec): min=86, max=10177, avg=3439.24, stdev=2837.79 00:28:06.598 lat (msec): min=1470, max=10201, avg=3511.72, stdev=2882.19 00:28:06.598 clat percentiles (msec): 00:28:06.598 | 1.00th=[ 1469], 5.00th=[ 1485], 10.00th=[ 1519], 20.00th=[ 1603], 00:28:06.598 | 30.00th=[ 1703], 40.00th=[ 1804], 50.00th=[ 1921], 60.00th=[ 2022], 00:28:06.598 | 70.00th=[ 2140], 80.00th=[ 6477], 90.00th=[ 8792], 95.00th=[ 8926], 00:28:06.598 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:28:06.598 | 99.99th=[10134] 00:28:06.598 bw ( KiB/s): min=28672, max=28672, per=0.72%, avg=28672.00, stdev= 0.00, samples=1 00:28:06.598 iops : min= 28, max= 28, avg=28.00, stdev= 0.00, samples=1 00:28:06.598 lat (msec) : 100=0.70%, 2000=54.23%, >=2000=45.07% 00:28:06.598 cpu : usr=0.00%, sys=1.00%, ctx=161, majf=0, minf=32769 00:28:06.598 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.6%, 16=11.3%, 32=22.5%, >=64=55.6% 00:28:06.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.598 complete : 0=0.0%, 4=93.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=6.2% 00:28:06.598 issued rwts: total=142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.598 job4: (groupid=0, jobs=1): err= 0: pid=3652728: Sat Jul 13 21:14:55 2024 00:28:06.598 read: IOPS=31, BW=31.8MiB/s (33.3MB/s)(332MiB/10439msec) 00:28:06.598 slat (usec): min=41, max=2095.4k, avg=31135.47, stdev=213325.17 00:28:06.598 clat (msec): min=99, max=6785, avg=2407.14, stdev=1925.42 00:28:06.598 lat (msec): min=836, max=6791, avg=2438.27, stdev=1944.42 00:28:06.598 clat percentiles (msec): 00:28:06.598 | 1.00th=[ 818], 5.00th=[ 835], 10.00th=[ 844], 20.00th=[ 844], 00:28:06.598 | 30.00th=[ 852], 40.00th=[ 860], 50.00th=[ 2467], 60.00th=[ 2702], 00:28:06.598 | 70.00th=[ 2937], 80.00th=[ 3138], 90.00th=[ 6678], 95.00th=[ 6745], 00:28:06.598 | 99.00th=[ 6745], 
99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:28:06.598 | 99.99th=[ 6812] 00:28:06.598 bw ( KiB/s): min= 6144, max=151552, per=2.61%, avg=104448.00, stdev=67200.20, samples=4 00:28:06.598 iops : min= 6, max= 148, avg=102.00, stdev=65.63, samples=4 00:28:06.598 lat (msec) : 100=0.30%, 1000=46.69%, 2000=0.60%, >=2000=52.41% 00:28:06.598 cpu : usr=0.00%, sys=1.42%, ctx=360, majf=0, minf=32769 00:28:06.598 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.0% 00:28:06.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.598 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:28:06.598 issued rwts: total=332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.598 job4: (groupid=0, jobs=1): err= 0: pid=3652729: Sat Jul 13 21:14:55 2024 00:28:06.598 read: IOPS=43, BW=43.2MiB/s (45.3MB/s)(448MiB/10367msec) 00:28:06.598 slat (usec): min=35, max=2083.1k, avg=22913.64, stdev=183915.65 00:28:06.598 clat (msec): min=98, max=10201, avg=2353.09, stdev=2541.28 00:28:06.598 lat (msec): min=383, max=10209, avg=2376.01, stdev=2555.49 00:28:06.598 clat percentiles (msec): 00:28:06.598 | 1.00th=[ 384], 5.00th=[ 388], 10.00th=[ 405], 20.00th=[ 523], 00:28:06.598 | 30.00th=[ 667], 40.00th=[ 768], 50.00th=[ 776], 60.00th=[ 776], 00:28:06.598 | 70.00th=[ 2567], 80.00th=[ 6544], 90.00th=[ 6678], 95.00th=[ 6745], 00:28:06.598 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[10268], 99.95th=[10268], 00:28:06.598 | 99.99th=[10268] 00:28:06.598 bw ( KiB/s): min= 8192, max=235520, per=3.27%, avg=131013.00, stdev=100892.39, samples=5 00:28:06.598 iops : min= 8, max= 230, avg=127.80, stdev=98.50, samples=5 00:28:06.598 lat (msec) : 100=0.22%, 500=18.08%, 750=18.53%, 1000=27.23%, >=2000=35.94% 00:28:06.598 cpu : usr=0.04%, sys=1.39%, ctx=645, majf=0, minf=32769 00:28:06.599 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.1%, >=64=85.9% 00:28:06.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.599 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:06.599 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.599 job5: (groupid=0, jobs=1): err= 0: pid=3652738: Sat Jul 13 21:14:55 2024 00:28:06.599 read: IOPS=71, BW=71.8MiB/s (75.3MB/s)(739MiB/10287msec) 00:28:06.599 slat (usec): min=427, max=2048.4k, avg=13774.47, stdev=105547.66 00:28:06.599 clat (msec): min=104, max=2864, avg=1507.50, stdev=850.99 00:28:06.599 lat (msec): min=280, max=2866, avg=1521.28, stdev=848.46 00:28:06.599 clat percentiles (msec): 00:28:06.599 | 1.00th=[ 296], 5.00th=[ 351], 10.00th=[ 477], 20.00th=[ 718], 00:28:06.599 | 30.00th=[ 877], 40.00th=[ 986], 50.00th=[ 1217], 60.00th=[ 1401], 00:28:06.599 | 70.00th=[ 2366], 80.00th=[ 2534], 90.00th=[ 2635], 95.00th=[ 2735], 00:28:06.599 | 99.00th=[ 2869], 99.50th=[ 2869], 99.90th=[ 2869], 99.95th=[ 2869], 00:28:06.599 | 99.99th=[ 2869] 00:28:06.599 bw ( KiB/s): min=12288, max=333824, per=2.84%, avg=113757.09, stdev=112510.95, samples=11 00:28:06.599 iops : min= 12, max= 326, avg=111.09, stdev=109.87, samples=11 00:28:06.599 lat (msec) : 250=0.14%, 500=10.15%, 750=10.55%, 1000=19.89%, 2000=20.30% 00:28:06.599 lat (msec) : >=2000=38.97% 00:28:06.599 cpu : usr=0.04%, sys=1.08%, ctx=2392, majf=0, minf=32769 00:28:06.599 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:28:06.599 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.599 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:06.599 issued rwts: total=739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.599 job5: (groupid=0, jobs=1): err= 0: pid=3652739: Sat Jul 13 21:14:55 2024 00:28:06.599 read: IOPS=89, BW=89.6MiB/s (94.0MB/s)(922MiB/10290msec) 00:28:06.599 slat (usec): min=419, max=2064.3k, avg=11101.51, stdev=86807.45 00:28:06.599 clat (msec): min=50, max=3434, avg=1341.35, stdev=933.88 00:28:06.599 lat (msec): min=432, max=3437, avg=1352.45, stdev=935.62 00:28:06.599 clat percentiles (msec): 00:28:06.599 | 1.00th=[ 435], 5.00th=[ 477], 10.00th=[ 502], 20.00th=[ 523], 00:28:06.599 | 30.00th=[ 550], 40.00th=[ 827], 50.00th=[ 1045], 60.00th=[ 1167], 00:28:06.599 | 70.00th=[ 1284], 80.00th=[ 2635], 90.00th=[ 2903], 95.00th=[ 3071], 00:28:06.599 | 99.00th=[ 3138], 99.50th=[ 3306], 99.90th=[ 3440], 99.95th=[ 3440], 00:28:06.599 | 99.99th=[ 3440] 00:28:06.599 bw ( KiB/s): min= 6144, max=280576, per=2.90%, avg=116150.86, stdev=88982.20, samples=14 00:28:06.599 iops : min= 6, max= 274, avg=113.43, stdev=86.90, samples=14 00:28:06.599 lat (msec) : 100=0.11%, 500=9.33%, 750=28.31%, 1000=10.41%, 2000=24.30% 00:28:06.599 lat (msec) : >=2000=27.55% 00:28:06.599 cpu : usr=0.00%, sys=1.54%, ctx=2949, majf=0, minf=32769 00:28:06.599 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:28:06.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.599 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.599 issued rwts: total=922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.599 job5: (groupid=0, jobs=1): err= 0: pid=3652740: Sat Jul 13 21:14:55 2024 00:28:06.599 read: IOPS=110, BW=110MiB/s (116MB/s)(1139MiB/10322msec) 00:28:06.599 slat (usec): min=38, max=2043.4k, avg=8982.04, stdev=83840.85 00:28:06.599 clat (msec): min=87, max=3633, avg=1094.42, stdev=1031.02 00:28:06.599 lat (msec): min=329, max=3734, avg=1103.40, stdev=1034.95 00:28:06.599 clat percentiles (msec): 00:28:06.599 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 401], 00:28:06.599 | 30.00th=[ 460], 40.00th=[ 493], 50.00th=[ 535], 60.00th=[ 651], 00:28:06.599 | 70.00th=[ 818], 80.00th=[ 2567], 90.00th=[ 2937], 95.00th=[ 3171], 00:28:06.599 | 99.00th=[ 3574], 99.50th=[ 3574], 99.90th=[ 3608], 99.95th=[ 3641], 00:28:06.599 | 99.99th=[ 3641] 00:28:06.599 bw ( KiB/s): min= 8192, max=374784, per=3.69%, avg=147894.86, stdev=121629.55, samples=14 00:28:06.599 iops : min= 8, max= 366, avg=144.43, stdev=118.78, samples=14 00:28:06.599 lat (msec) : 100=0.09%, 500=43.90%, 750=24.41%, 1000=3.07%, 2000=6.23% 00:28:06.599 lat (msec) : >=2000=22.30% 00:28:06.599 cpu : usr=0.05%, sys=1.55%, ctx=3086, majf=0, minf=32769 00:28:06.599 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:28:06.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.599 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.599 issued rwts: total=1139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.599 job5: (groupid=0, jobs=1): err= 0: pid=3652741: Sat Jul 13 21:14:55 2024 00:28:06.599 read: IOPS=102, BW=103MiB/s (108MB/s)(1058MiB/10310msec) 00:28:06.599 slat (usec): 
min=48, max=1953.8k, avg=9700.88, stdev=60711.16 00:28:06.599 clat (msec): min=40, max=3072, avg=1145.39, stdev=681.34 00:28:06.599 lat (msec): min=432, max=3099, avg=1155.09, stdev=682.29 00:28:06.599 clat percentiles (msec): 00:28:06.599 | 1.00th=[ 430], 5.00th=[ 443], 10.00th=[ 542], 20.00th=[ 634], 00:28:06.599 | 30.00th=[ 709], 40.00th=[ 860], 50.00th=[ 927], 60.00th=[ 1011], 00:28:06.599 | 70.00th=[ 1183], 80.00th=[ 1418], 90.00th=[ 2265], 95.00th=[ 2869], 00:28:06.599 | 99.00th=[ 3004], 99.50th=[ 3004], 99.90th=[ 3071], 99.95th=[ 3071], 00:28:06.599 | 99.99th=[ 3071] 00:28:06.599 bw ( KiB/s): min=30720, max=301056, per=3.40%, avg=136017.00, stdev=76086.85, samples=14 00:28:06.599 iops : min= 30, max= 294, avg=132.71, stdev=74.37, samples=14 00:28:06.599 lat (msec) : 50=0.09%, 500=6.62%, 750=26.75%, 1000=24.76%, 2000=28.07% 00:28:06.599 lat (msec) : >=2000=13.71% 00:28:06.599 cpu : usr=0.02%, sys=1.97%, ctx=2946, majf=0, minf=32769 00:28:06.599 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:28:06.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.599 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.599 issued rwts: total=1058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.599 job5: (groupid=0, jobs=1): err= 0: pid=3652742: Sat Jul 13 21:14:55 2024 00:28:06.599 read: IOPS=106, BW=106MiB/s (111MB/s)(1098MiB/10328msec) 00:28:06.599 slat (usec): min=43, max=2067.9k, avg=9308.43, stdev=107551.57 00:28:06.599 clat (msec): min=103, max=5023, avg=1150.47, stdev=1507.00 00:28:06.599 lat (msec): min=223, max=5027, avg=1159.78, stdev=1511.60 00:28:06.599 clat percentiles (msec): 00:28:06.599 | 1.00th=[ 226], 5.00th=[ 236], 10.00th=[ 264], 20.00th=[ 279], 00:28:06.599 | 30.00th=[ 284], 40.00th=[ 305], 50.00th=[ 330], 60.00th=[ 523], 00:28:06.599 | 70.00th=[ 625], 80.00th=[ 2534], 90.00th=[ 4866], 95.00th=[ 4933], 00:28:06.599 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:28:06.599 | 99.99th=[ 5000] 00:28:06.599 bw ( KiB/s): min= 4096, max=503808, per=4.96%, avg=198656.00, stdev=174525.17, samples=10 00:28:06.599 iops : min= 4, max= 492, avg=194.00, stdev=170.43, samples=10 00:28:06.599 lat (msec) : 250=9.11%, 500=45.90%, 750=20.95%, >=2000=24.04% 00:28:06.599 cpu : usr=0.09%, sys=1.70%, ctx=2169, majf=0, minf=32769 00:28:06.599 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.3% 00:28:06.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.599 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.599 issued rwts: total=1098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.599 job5: (groupid=0, jobs=1): err= 0: pid=3652744: Sat Jul 13 21:14:55 2024 00:28:06.599 read: IOPS=263, BW=263MiB/s (276MB/s)(2728MiB/10354msec) 00:28:06.599 slat (usec): min=38, max=2024.4k, avg=3753.37, stdev=55563.63 00:28:06.599 clat (msec): min=103, max=3016, avg=461.66, stdev=746.04 00:28:06.599 lat (msec): min=116, max=3016, avg=465.41, stdev=749.54 00:28:06.599 clat percentiles (msec): 00:28:06.599 | 1.00th=[ 116], 5.00th=[ 117], 10.00th=[ 117], 20.00th=[ 118], 00:28:06.599 | 30.00th=[ 118], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 120], 00:28:06.599 | 70.00th=[ 121], 80.00th=[ 768], 90.00th=[ 1167], 95.00th=[ 2635], 00:28:06.599 | 99.00th=[ 2869], 99.50th=[ 2937], 99.90th=[ 
3004], 99.95th=[ 3004], 00:28:06.599 | 99.99th=[ 3004] 00:28:06.599 bw ( KiB/s): min=12288, max=1099776, per=10.23%, avg=409553.08, stdev=465942.04, samples=13 00:28:06.599 iops : min= 12, max= 1074, avg=399.85, stdev=455.09, samples=13 00:28:06.599 lat (msec) : 250=75.88%, 500=0.11%, 750=3.74%, 1000=9.24%, 2000=1.69% 00:28:06.599 lat (msec) : >=2000=9.35% 00:28:06.599 cpu : usr=0.10%, sys=2.72%, ctx=2791, majf=0, minf=32769 00:28:06.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:28:06.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.600 issued rwts: total=2728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.600 job5: (groupid=0, jobs=1): err= 0: pid=3652745: Sat Jul 13 21:14:55 2024 00:28:06.600 read: IOPS=47, BW=47.0MiB/s (49.3MB/s)(475MiB/10102msec) 00:28:06.600 slat (usec): min=395, max=2140.9k, avg=21100.28, stdev=152983.86 00:28:06.600 clat (msec): min=77, max=6275, avg=2362.21, stdev=2048.79 00:28:06.600 lat (msec): min=113, max=6281, avg=2383.31, stdev=2052.53 00:28:06.600 clat percentiles (msec): 00:28:06.600 | 1.00th=[ 136], 5.00th=[ 414], 10.00th=[ 625], 20.00th=[ 768], 00:28:06.600 | 30.00th=[ 844], 40.00th=[ 961], 50.00th=[ 1070], 60.00th=[ 1905], 00:28:06.600 | 70.00th=[ 2735], 80.00th=[ 5201], 90.00th=[ 5805], 95.00th=[ 6141], 00:28:06.600 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6275], 99.95th=[ 6275], 00:28:06.600 | 99.99th=[ 6275] 00:28:06.600 bw ( KiB/s): min=12288, max=174080, per=1.97%, avg=78961.78, stdev=60754.46, samples=9 00:28:06.600 iops : min= 12, max= 170, avg=77.11, stdev=59.33, samples=9 00:28:06.600 lat (msec) : 100=0.21%, 250=2.53%, 500=3.16%, 750=11.79%, 1000=30.74% 00:28:06.600 lat (msec) : 2000=11.79%, >=2000=39.79% 00:28:06.600 cpu : usr=0.02%, sys=1.32%, ctx=1473, majf=0, minf=32769 00:28:06.600 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.7%, >=64=86.7% 00:28:06.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.600 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:06.600 issued rwts: total=475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.600 job5: (groupid=0, jobs=1): err= 0: pid=3652746: Sat Jul 13 21:14:55 2024 00:28:06.600 read: IOPS=93, BW=93.1MiB/s (97.6MB/s)(956MiB/10273msec) 00:28:06.600 slat (usec): min=43, max=1972.8k, avg=10629.14, stdev=67173.02 00:28:06.600 clat (msec): min=103, max=2890, avg=1265.63, stdev=688.44 00:28:06.600 lat (msec): min=663, max=2896, avg=1276.26, stdev=689.54 00:28:06.600 clat percentiles (msec): 00:28:06.600 | 1.00th=[ 667], 5.00th=[ 684], 10.00th=[ 718], 20.00th=[ 743], 00:28:06.600 | 30.00th=[ 852], 40.00th=[ 877], 50.00th=[ 919], 60.00th=[ 1150], 00:28:06.600 | 70.00th=[ 1351], 80.00th=[ 1519], 90.00th=[ 2769], 95.00th=[ 2836], 00:28:06.600 | 99.00th=[ 2869], 99.50th=[ 2869], 99.90th=[ 2903], 99.95th=[ 2903], 00:28:06.600 | 99.99th=[ 2903] 00:28:06.600 bw ( KiB/s): min=12288, max=198656, per=2.82%, avg=113049.60, stdev=60194.68, samples=15 00:28:06.600 iops : min= 12, max= 194, avg=110.40, stdev=58.78, samples=15 00:28:06.600 lat (msec) : 250=0.10%, 750=22.28%, 1000=30.96%, 2000=31.28%, >=2000=15.38% 00:28:06.600 cpu : usr=0.03%, sys=1.68%, ctx=1067, majf=0, minf=32769 00:28:06.600 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 
32=3.3%, >=64=93.4% 00:28:06.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.600 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.600 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.600 job5: (groupid=0, jobs=1): err= 0: pid=3652747: Sat Jul 13 21:14:55 2024 00:28:06.600 read: IOPS=166, BW=166MiB/s (174MB/s)(1695MiB/10210msec) 00:28:06.600 slat (usec): min=42, max=2132.5k, avg=5895.85, stdev=52108.09 00:28:06.600 clat (msec): min=208, max=2926, avg=734.51, stdev=630.70 00:28:06.600 lat (msec): min=209, max=2929, avg=740.40, stdev=633.23 00:28:06.600 clat percentiles (msec): 00:28:06.600 | 1.00th=[ 222], 5.00th=[ 359], 10.00th=[ 368], 20.00th=[ 368], 00:28:06.600 | 30.00th=[ 372], 40.00th=[ 502], 50.00th=[ 542], 60.00th=[ 617], 00:28:06.600 | 70.00th=[ 735], 80.00th=[ 802], 90.00th=[ 1053], 95.00th=[ 2735], 00:28:06.600 | 99.00th=[ 2903], 99.50th=[ 2937], 99.90th=[ 2937], 99.95th=[ 2937], 00:28:06.600 | 99.99th=[ 2937] 00:28:06.600 bw ( KiB/s): min= 4096, max=362496, per=5.01%, avg=200672.56, stdev=101736.73, samples=16 00:28:06.600 iops : min= 4, max= 354, avg=195.94, stdev=99.34, samples=16 00:28:06.600 lat (msec) : 250=1.59%, 500=38.47%, 750=31.98%, 1000=16.46%, 2000=3.78% 00:28:06.600 lat (msec) : >=2000=7.73% 00:28:06.600 cpu : usr=0.05%, sys=2.43%, ctx=3762, majf=0, minf=32769 00:28:06.600 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:28:06.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.600 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.600 issued rwts: total=1695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.600 job5: (groupid=0, jobs=1): err= 0: pid=3652748: Sat Jul 13 21:14:55 2024 00:28:06.600 read: IOPS=112, BW=112MiB/s (118MB/s)(1154MiB/10279msec) 00:28:06.600 slat (usec): min=42, max=2157.7k, avg=8662.09, stdev=79105.62 00:28:06.600 clat (msec): min=277, max=3825, avg=927.09, stdev=879.03 00:28:06.600 lat (msec): min=281, max=3835, avg=935.75, stdev=883.69 00:28:06.600 clat percentiles (msec): 00:28:06.600 | 1.00th=[ 330], 5.00th=[ 384], 10.00th=[ 388], 20.00th=[ 388], 00:28:06.600 | 30.00th=[ 414], 40.00th=[ 527], 50.00th=[ 558], 60.00th=[ 600], 00:28:06.600 | 70.00th=[ 902], 80.00th=[ 1045], 90.00th=[ 2668], 95.00th=[ 3306], 00:28:06.600 | 99.00th=[ 3708], 99.50th=[ 3775], 99.90th=[ 3809], 99.95th=[ 3809], 00:28:06.600 | 99.99th=[ 3809] 00:28:06.600 bw ( KiB/s): min=57344, max=331776, per=4.38%, avg=175220.50, stdev=102802.12, samples=12 00:28:06.600 iops : min= 56, max= 324, avg=171.08, stdev=100.34, samples=12 00:28:06.600 lat (msec) : 500=36.31%, 750=31.63%, 1000=9.01%, 2000=11.27%, >=2000=11.79% 00:28:06.600 cpu : usr=0.07%, sys=1.73%, ctx=1443, majf=0, minf=32769 00:28:06.600 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:28:06.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.600 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.600 issued rwts: total=1154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.600 job5: (groupid=0, jobs=1): err= 0: pid=3652749: Sat Jul 13 21:14:55 2024 00:28:06.600 read: IOPS=46, BW=46.5MiB/s (48.7MB/s)(484MiB/10412msec) 00:28:06.600 slat (usec): 
min=95, max=2149.1k, avg=21291.62, stdev=162429.59 00:28:06.600 clat (msec): min=104, max=7501, avg=2616.66, stdev=2582.54 00:28:06.600 lat (msec): min=859, max=7532, avg=2637.95, stdev=2586.26 00:28:06.600 clat percentiles (msec): 00:28:06.600 | 1.00th=[ 860], 5.00th=[ 885], 10.00th=[ 936], 20.00th=[ 969], 00:28:06.600 | 30.00th=[ 1083], 40.00th=[ 1116], 50.00th=[ 1133], 60.00th=[ 1167], 00:28:06.600 | 70.00th=[ 1217], 80.00th=[ 6678], 90.00th=[ 7148], 95.00th=[ 7349], 00:28:06.600 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 7483], 99.95th=[ 7483], 00:28:06.600 | 99.99th=[ 7483] 00:28:06.600 bw ( KiB/s): min= 4096, max=153600, per=2.02%, avg=80964.22, stdev=51931.70, samples=9 00:28:06.600 iops : min= 4, max= 150, avg=78.89, stdev=50.63, samples=9 00:28:06.600 lat (msec) : 250=0.21%, 1000=23.97%, 2000=47.93%, >=2000=27.89% 00:28:06.601 cpu : usr=0.01%, sys=1.30%, ctx=1107, majf=0, minf=32769 00:28:06.601 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.6%, >=64=87.0% 00:28:06.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.601 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:06.601 issued rwts: total=484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.601 job5: (groupid=0, jobs=1): err= 0: pid=3652750: Sat Jul 13 21:14:55 2024 00:28:06.601 read: IOPS=100, BW=100MiB/s (105MB/s)(1031MiB/10283msec) 00:28:06.601 slat (usec): min=477, max=2004.2k, avg=9944.12, stdev=78906.90 00:28:06.601 clat (msec): min=26, max=3937, avg=1084.79, stdev=945.08 00:28:06.601 lat (msec): min=265, max=3943, avg=1094.73, stdev=949.89 00:28:06.601 clat percentiles (msec): 00:28:06.601 | 1.00th=[ 266], 5.00th=[ 271], 10.00th=[ 305], 20.00th=[ 338], 00:28:06.601 | 30.00th=[ 368], 40.00th=[ 384], 50.00th=[ 617], 60.00th=[ 1053], 00:28:06.601 | 70.00th=[ 1485], 80.00th=[ 1737], 90.00th=[ 2869], 95.00th=[ 3037], 00:28:06.601 | 99.00th=[ 3675], 99.50th=[ 3775], 99.90th=[ 3910], 99.95th=[ 3943], 00:28:06.601 | 99.99th=[ 3943] 00:28:06.601 bw ( KiB/s): min=10240, max=406738, per=3.55%, avg=142194.62, stdev=129985.37, samples=13 00:28:06.601 iops : min= 10, max= 397, avg=138.85, stdev=126.90, samples=13 00:28:06.601 lat (msec) : 50=0.10%, 500=45.49%, 750=7.08%, 1000=6.30%, 2000=24.93% 00:28:06.601 lat (msec) : >=2000=16.10% 00:28:06.601 cpu : usr=0.04%, sys=1.34%, ctx=3000, majf=0, minf=32769 00:28:06.601 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:28:06.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.601 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.601 issued rwts: total=1031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.601 job5: (groupid=0, jobs=1): err= 0: pid=3652751: Sat Jul 13 21:14:55 2024 00:28:06.601 read: IOPS=98, BW=98.1MiB/s (103MB/s)(1009MiB/10288msec) 00:28:06.601 slat (usec): min=38, max=2046.0k, avg=10101.53, stdev=110311.80 00:28:06.601 clat (msec): min=90, max=6488, avg=1242.61, stdev=1434.74 00:28:06.601 lat (msec): min=251, max=6506, avg=1252.71, stdev=1438.45 00:28:06.601 clat percentiles (msec): 00:28:06.601 | 1.00th=[ 253], 5.00th=[ 259], 10.00th=[ 288], 20.00th=[ 355], 00:28:06.601 | 30.00th=[ 384], 40.00th=[ 384], 50.00th=[ 388], 60.00th=[ 456], 00:28:06.601 | 70.00th=[ 835], 80.00th=[ 2735], 90.00th=[ 4329], 95.00th=[ 4530], 00:28:06.601 | 99.00th=[ 4665], 99.50th=[ 4665], 
99.90th=[ 6477], 99.95th=[ 6477], 00:28:06.601 | 99.99th=[ 6477] 00:28:06.601 bw ( KiB/s): min= 4096, max=444416, per=4.51%, avg=180428.80, stdev=156181.39, samples=10 00:28:06.601 iops : min= 4, max= 434, avg=176.20, stdev=152.52, samples=10 00:28:06.601 lat (msec) : 100=0.10%, 500=60.06%, 750=6.44%, 1000=6.54%, >=2000=26.86% 00:28:06.601 cpu : usr=0.04%, sys=1.68%, ctx=1185, majf=0, minf=32769 00:28:06.601 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:28:06.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.601 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:06.601 issued rwts: total=1009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:06.601 00:28:06.601 Run status group 0 (all jobs): 00:28:06.601 READ: bw=3911MiB/s (4101MB/s), 1293KiB/s-263MiB/s (1324kB/s-276MB/s), io=39.9GiB (42.8GB), run=10015-10448msec 00:28:06.601 00:28:06.601 Disk stats (read/write): 00:28:06.601 nvme0n1: ios=29845/0, merge=0/0, ticks=5910506/0, in_queue=5910506, util=97.99% 00:28:06.601 nvme1n1: ios=30694/0, merge=0/0, ticks=6317702/0, in_queue=6317702, util=98.25% 00:28:06.601 nvme2n1: ios=31911/0, merge=0/0, ticks=6631896/0, in_queue=6631896, util=98.54% 00:28:06.601 nvme3n1: ios=74593/0, merge=0/0, ticks=7432411/0, in_queue=7432411, util=98.67% 00:28:06.601 nvme4n1: ios=42183/0, merge=0/0, ticks=6682245/0, in_queue=6682245, util=98.91% 00:28:06.601 nvme5n1: ios=115597/0, merge=0/0, ticks=7048501/0, in_queue=7048501, util=98.83% 00:28:06.601 21:14:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:28:06.601 21:14:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:28:06.601 21:14:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:06.601 21:14:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:28:06.601 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000000 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000000 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:06.601 21:14:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:28:07.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000001 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000001 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:07.539 21:14:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:08.476 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000002 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000002 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:08.476 21:14:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:09.413 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:09.413 
21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000003 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000003 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:09.413 21:15:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:10.350 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000004 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000004 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:10.350 21:15:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:11.286 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000005 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000005 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:28:11.286 
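The six-way teardown traced above (and completed just below with the last nvmf_delete_subsystem call for cnode5) repeats one pattern per subsystem: disconnect the host-side controller, wait until the namespace's serial number disappears from lsblk, then delete the subsystem on the SPDK target over JSON-RPC. A minimal bash sketch of that loop, reconstructed from this xtrace output rather than copied from the srq_overwhelm.sh source:

# Sketch only: the loop and the helper names (waitforserial_disconnect and
# rpc_cmd, both provided by autotest_common.sh) are taken from the trace above,
# so this assumes the test environment has sourced that file.
for i in $(seq 0 5); do
    # Drop the host-side NVMe-oF controller for this subsystem.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # Block until lsblk no longer lists a device with this serial,
    # i.e. the kernel block device is fully torn down.
    waitforserial_disconnect "SPDK0000000000000${i}"
    # Remove the subsystem from the running SPDK target via JSON-RPC.
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done

Waiting on the serial before the RPC delete matters: it ensures the initiator has finished removing the block device before the target-side subsystem vanishes, which is why each iteration above interleaves a wait between the disconnect and the delete.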
21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.286 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:28:11.287 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:28:11.287 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.287 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:28:11.544 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:11.545 rmmod nvme_rdma 00:28:11.545 rmmod nvme_fabrics 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 3651215 ']' 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 3651215 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@946 -- # '[' -z 3651215 ']' 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # kill -0 3651215 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@951 -- # uname 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3651215 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3651215' 00:28:11.545 killing process with pid 3651215 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@965 -- # kill 3651215 00:28:11.545 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@970 -- # wait 3651215 00:28:11.803 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:11.803 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:11.803 00:28:11.803 real 0m31.324s 00:28:11.803 user 1m48.056s 00:28:11.803 sys 0m16.726s 00:28:11.803 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:11.803 21:15:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:11.803 ************************************ 00:28:11.803 END TEST 
nvmf_srq_overwhelm 00:28:11.803 ************************************ 00:28:11.803 21:15:02 nvmf_rdma -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:28:11.803 21:15:02 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:11.803 21:15:02 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:11.803 21:15:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:12.062 ************************************ 00:28:12.062 START TEST nvmf_shutdown 00:28:12.062 ************************************ 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:28:12.062 * Looking for test storage... 00:28:12.062 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 
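Each suite runs through run_test, which validates its arguments, prints the START TEST banner, executes the test script under time (producing the real/user/sys block seen earlier), and closes with the END TEST banner. A rough sketch, assuming the banner and timing details beyond what the trace itself shows:

run_test() {
    (( $# > 1 )) || return 1    # mirrors the '[' N -le 1 ']' guard in the trace
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                   # timing surfaces as the real/user/sys summary
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}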
00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:12.062 ************************************ 00:28:12.062 START TEST nvmf_shutdown_tc1 00:28:12.062 ************************************ 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.062 21:15:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # 
local -ga x722 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:18.630 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:18.630 21:15:09 
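The gather_supported_nvmf_pci_devs walk above buckets NICs into the e810, x722 and mlx arrays by PCI vendor:device ID; 0x15b3:0x1015 (a ConnectX-4 Lx) is what this host reports, hence the mlx5_core path. A standalone sketch of the same bucketing; the ID list is a subset of the one in the trace, and the lspci parsing is my own stand-in for the pci_bus_cache machinery common.sh actually uses:

#!/usr/bin/env bash
declare -a e810 x722 mlx
while read -r addr vendor device; do
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) e810+=("$addr") ;;  # Intel E810
        0x8086:0x37d2)               x722+=("$addr") ;;  # Intel X722
        0x15b3:0x1015|0x15b3:0x1017) mlx+=("$addr") ;;   # Mellanox ConnectX-4 Lx / -5
    esac
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x" $3, "0x" $4}')
echo "Found ${#mlx[@]} Mellanox device(s): ${mlx[*]}"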
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:18.630 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:18.630 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:18.630 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:18.630 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:18.890 21:15:09 
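rdma_device_init is little more than the modprobe chain above, loading the kernel RDMA stack before any interface work starts; get_rdma_if_list then keeps only the net devices that the rxe tooling also reports. A condensed sketch of both steps, with the module list copied from the trace and net_devs/rxe_net_devs assumed to be populated as in the log:

# Load the kernel RDMA stack; modprobe is idempotent, so re-running is safe.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done

# Keep only net devices the rxe/RDMA tooling also knows about.
get_rdma_if_list() {
    local net_dev rxe_net_dev
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2    # move on to the next net_dev, as in the trace
            fi
        done
    done
}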
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:18.890 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:18.891 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:18.891 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:18.891 altname enp217s0f0np0 00:28:18.891 altname ens818f0np0 00:28:18.891 inet 192.168.100.8/24 scope global mlx_0_0 00:28:18.891 valid_lft forever preferred_lft forever 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:18.891 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:18.891 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:18.891 altname enp217s0f1np1 00:28:18.891 altname ens818f1np1 00:28:18.891 inet 192.168.100.9/24 scope global mlx_0_1 00:28:18.891 valid_lft forever preferred_lft forever 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:18.891 21:15:09 
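get_ip_address is a one-liner around iproute2: the -o flag flattens each address record onto a single line, field 4 is the CIDR-form address, and cut strips the prefix length. Reconstructed from the trace:

# Return the first IPv4 address configured on an interface, without the /prefix.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0    # prints 192.168.100.8 in this run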
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:18.891 192.168.100.9' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:18.891 192.168.100.9' 00:28:18.891 21:15:09 
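Both interface addresses end up in RDMA_IP_LIST, one per line, and the head/tail split traced here peels off the first and second target IPs. Reconstructed from the trace; the emptiness guard mirrors the '[' -z ... ']' check:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
[ -n "$NVMF_FIRST_TARGET_IP" ] || { echo 'no RDMA-capable IPs found' >&2; exit 1; }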
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:18.891 192.168.100.9' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3659279 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3659279 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3659279 ']' 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:18.891 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.891 [2024-07-13 21:15:09.752958] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
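nvmfappstart launches nvmf_tgt in the background (pid 3659279 here, kept in nvmfpid) and blocks in waitforlisten until the RPC socket answers. A sketch of the wait loop; the 100-retry cap appears in the trace, while the rpc.py probe is a stand-in for the framework's own RPC client:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local i=0 max_retries=100
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2> /dev/null || return 1    # target died while starting up
        # hypothetical probe: any cheap RPC proves the socket accepts commands
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}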
00:28:18.891 [2024-07-13 21:15:09.753029] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.150 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.150 [2024-07-13 21:15:09.824044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.150 [2024-07-13 21:15:09.863580] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.151 [2024-07-13 21:15:09.863623] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.151 [2024-07-13 21:15:09.863633] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.151 [2024-07-13 21:15:09.863641] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.151 [2024-07-13 21:15:09.863648] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.151 [2024-07-13 21:15:09.863750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.151 [2024-07-13 21:15:09.863838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.151 [2024-07-13 21:15:09.863948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.151 [2024-07-13 21:15:09.863949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:19.151 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:19.151 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:28:19.151 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:19.151 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.151 21:15:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.151 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.151 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:19.151 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.151 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.151 [2024-07-13 21:15:10.033769] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe95f70/0xe9a460) succeed. 00:28:19.411 [2024-07-13 21:15:10.044292] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe975b0/0xedbaf0) succeed. 
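With the target up, the suite creates the RDMA transport over /var/tmp/spdk.sock and then provisions one malloc-backed subsystem per cnode; the listener notice for 192.168.100.8 port 4420 appears once those RPCs are applied. Spelled out as standalone rpc.py calls, one subsystem shown (the 64 MiB / 512 B sizes are the MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE values sourced earlier; names follow the SPDK000000000000NN serials seen above):

# transport flags as in the trace: shared buffer pool of 1024, 8 KiB I/O unit
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# 64 MiB malloc bdev with 512-byte blocks
scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420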
00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.411 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.411 Malloc1 00:28:19.411 [2024-07-13 21:15:10.268282] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:19.411 Malloc2 00:28:19.670 Malloc3 00:28:19.670 Malloc4 
00:28:19.670 Malloc5 00:28:19.670 Malloc6 00:28:19.670 Malloc7 00:28:19.670 Malloc8 00:28:19.930 Malloc9 00:28:19.930 Malloc10 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3659487 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3659487 /var/tmp/bdevperf.sock 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3659487 ']' 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:19.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.930 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.930 { 00:28:19.930 "params": { 00:28:19.930 "name": "Nvme$subsystem", 00:28:19.931 "trtype": "$TEST_TRANSPORT", 00:28:19.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "$NVMF_PORT", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.931 "hdgst": ${hdgst:-false}, 00:28:19.931 "ddgst": ${ddgst:-false} 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 } 00:28:19.931 EOF 00:28:19.931 )") 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.931 { 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme$subsystem", 00:28:19.931 "trtype": 
"$TEST_TRANSPORT", 00:28:19.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "$NVMF_PORT", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.931 "hdgst": ${hdgst:-false}, 00:28:19.931 "ddgst": ${ddgst:-false} 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 } 00:28:19.931 EOF 00:28:19.931 )") 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.931 { 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme$subsystem", 00:28:19.931 "trtype": "$TEST_TRANSPORT", 00:28:19.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "$NVMF_PORT", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.931 "hdgst": ${hdgst:-false}, 00:28:19.931 "ddgst": ${ddgst:-false} 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 } 00:28:19.931 EOF 00:28:19.931 )") 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.931 { 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme$subsystem", 00:28:19.931 "trtype": "$TEST_TRANSPORT", 00:28:19.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "$NVMF_PORT", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.931 "hdgst": ${hdgst:-false}, 00:28:19.931 "ddgst": ${ddgst:-false} 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 } 00:28:19.931 EOF 00:28:19.931 )") 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.931 { 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme$subsystem", 00:28:19.931 "trtype": "$TEST_TRANSPORT", 00:28:19.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "$NVMF_PORT", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.931 "hdgst": ${hdgst:-false}, 00:28:19.931 "ddgst": ${ddgst:-false} 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 } 00:28:19.931 EOF 00:28:19.931 )") 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.931 [2024-07-13 21:15:10.750055] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:19.931 [2024-07-13 21:15:10.750109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.931 { 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme$subsystem", 00:28:19.931 "trtype": "$TEST_TRANSPORT", 00:28:19.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "$NVMF_PORT", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.931 "hdgst": ${hdgst:-false}, 00:28:19.931 "ddgst": ${ddgst:-false} 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 } 00:28:19.931 EOF 00:28:19.931 )") 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.931 { 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme$subsystem", 00:28:19.931 "trtype": "$TEST_TRANSPORT", 00:28:19.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "$NVMF_PORT", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.931 "hdgst": ${hdgst:-false}, 00:28:19.931 "ddgst": ${ddgst:-false} 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 } 00:28:19.931 EOF 00:28:19.931 )") 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.931 { 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme$subsystem", 00:28:19.931 "trtype": "$TEST_TRANSPORT", 00:28:19.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "$NVMF_PORT", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.931 "hdgst": ${hdgst:-false}, 00:28:19.931 "ddgst": ${ddgst:-false} 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 } 00:28:19.931 EOF 00:28:19.931 )") 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.931 { 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme$subsystem", 00:28:19.931 "trtype": "$TEST_TRANSPORT", 00:28:19.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "$NVMF_PORT", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.931 "hdgst": ${hdgst:-false}, 00:28:19.931 
"ddgst": ${ddgst:-false} 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 } 00:28:19.931 EOF 00:28:19.931 )") 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.931 { 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme$subsystem", 00:28:19.931 "trtype": "$TEST_TRANSPORT", 00:28:19.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "$NVMF_PORT", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.931 "hdgst": ${hdgst:-false}, 00:28:19.931 "ddgst": ${ddgst:-false} 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 } 00:28:19.931 EOF 00:28:19.931 )") 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:19.931 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:19.931 21:15:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme1", 00:28:19.931 "trtype": "rdma", 00:28:19.931 "traddr": "192.168.100.8", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "4420", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:19.931 "hdgst": false, 00:28:19.931 "ddgst": false 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 },{ 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme2", 00:28:19.931 "trtype": "rdma", 00:28:19.931 "traddr": "192.168.100.8", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "4420", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:19.931 "hdgst": false, 00:28:19.931 "ddgst": false 00:28:19.931 }, 00:28:19.931 "method": "bdev_nvme_attach_controller" 00:28:19.931 },{ 00:28:19.931 "params": { 00:28:19.931 "name": "Nvme3", 00:28:19.931 "trtype": "rdma", 00:28:19.931 "traddr": "192.168.100.8", 00:28:19.931 "adrfam": "ipv4", 00:28:19.931 "trsvcid": "4420", 00:28:19.931 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:19.931 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:19.931 "hdgst": false, 00:28:19.931 "ddgst": false 00:28:19.932 }, 00:28:19.932 "method": "bdev_nvme_attach_controller" 00:28:19.932 },{ 00:28:19.932 "params": { 00:28:19.932 "name": "Nvme4", 00:28:19.932 "trtype": "rdma", 00:28:19.932 "traddr": "192.168.100.8", 00:28:19.932 "adrfam": "ipv4", 00:28:19.932 "trsvcid": "4420", 00:28:19.932 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:19.932 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:19.932 "hdgst": false, 00:28:19.932 "ddgst": false 00:28:19.932 }, 00:28:19.932 "method": "bdev_nvme_attach_controller" 00:28:19.932 },{ 00:28:19.932 "params": { 00:28:19.932 "name": "Nvme5", 00:28:19.932 "trtype": "rdma", 00:28:19.932 "traddr": "192.168.100.8", 00:28:19.932 "adrfam": "ipv4", 00:28:19.932 "trsvcid": "4420", 00:28:19.932 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:19.932 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:19.932 "hdgst": 
false, 00:28:19.932 "ddgst": false 00:28:19.932 }, 00:28:19.932 "method": "bdev_nvme_attach_controller" 00:28:19.932 },{ 00:28:19.932 "params": { 00:28:19.932 "name": "Nvme6", 00:28:19.932 "trtype": "rdma", 00:28:19.932 "traddr": "192.168.100.8", 00:28:19.932 "adrfam": "ipv4", 00:28:19.932 "trsvcid": "4420", 00:28:19.932 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:19.932 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:19.932 "hdgst": false, 00:28:19.932 "ddgst": false 00:28:19.932 }, 00:28:19.932 "method": "bdev_nvme_attach_controller" 00:28:19.932 },{ 00:28:19.932 "params": { 00:28:19.932 "name": "Nvme7", 00:28:19.932 "trtype": "rdma", 00:28:19.932 "traddr": "192.168.100.8", 00:28:19.932 "adrfam": "ipv4", 00:28:19.932 "trsvcid": "4420", 00:28:19.932 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:19.932 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:19.932 "hdgst": false, 00:28:19.932 "ddgst": false 00:28:19.932 }, 00:28:19.932 "method": "bdev_nvme_attach_controller" 00:28:19.932 },{ 00:28:19.932 "params": { 00:28:19.932 "name": "Nvme8", 00:28:19.932 "trtype": "rdma", 00:28:19.932 "traddr": "192.168.100.8", 00:28:19.932 "adrfam": "ipv4", 00:28:19.932 "trsvcid": "4420", 00:28:19.932 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:19.932 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:19.932 "hdgst": false, 00:28:19.932 "ddgst": false 00:28:19.932 }, 00:28:19.932 "method": "bdev_nvme_attach_controller" 00:28:19.932 },{ 00:28:19.932 "params": { 00:28:19.932 "name": "Nvme9", 00:28:19.932 "trtype": "rdma", 00:28:19.932 "traddr": "192.168.100.8", 00:28:19.932 "adrfam": "ipv4", 00:28:19.932 "trsvcid": "4420", 00:28:19.932 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:19.932 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:19.932 "hdgst": false, 00:28:19.932 "ddgst": false 00:28:19.932 }, 00:28:19.932 "method": "bdev_nvme_attach_controller" 00:28:19.932 },{ 00:28:19.932 "params": { 00:28:19.932 "name": "Nvme10", 00:28:19.932 "trtype": "rdma", 00:28:19.932 "traddr": "192.168.100.8", 00:28:19.932 "adrfam": "ipv4", 00:28:19.932 "trsvcid": "4420", 00:28:19.932 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:19.932 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:19.932 "hdgst": false, 00:28:19.932 "ddgst": false 00:28:19.932 }, 00:28:19.932 "method": "bdev_nvme_attach_controller" 00:28:19.932 }' 00:28:20.192 [2024-07-13 21:15:10.825867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.192 [2024-07-13 21:15:10.864395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.130 21:15:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:21.130 21:15:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:28:21.130 21:15:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:21.130 21:15:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.130 21:15:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:21.130 21:15:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.130 21:15:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3659487 00:28:21.130 21:15:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:21.130 21:15:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@87 -- # sleep 1 00:28:22.068 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3659487 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3659279 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:22.068 { 00:28:22.068 "params": { 00:28:22.068 "name": "Nvme$subsystem", 00:28:22.068 "trtype": "$TEST_TRANSPORT", 00:28:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.068 "adrfam": "ipv4", 00:28:22.068 "trsvcid": "$NVMF_PORT", 00:28:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.068 "hdgst": ${hdgst:-false}, 00:28:22.068 "ddgst": ${ddgst:-false} 00:28:22.068 }, 00:28:22.068 "method": "bdev_nvme_attach_controller" 00:28:22.068 } 00:28:22.068 EOF 00:28:22.068 )") 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:22.068 { 00:28:22.068 "params": { 00:28:22.068 "name": "Nvme$subsystem", 00:28:22.068 "trtype": "$TEST_TRANSPORT", 00:28:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.068 "adrfam": "ipv4", 00:28:22.068 "trsvcid": "$NVMF_PORT", 00:28:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.068 "hdgst": ${hdgst:-false}, 00:28:22.068 "ddgst": ${ddgst:-false} 00:28:22.068 }, 00:28:22.068 "method": "bdev_nvme_attach_controller" 00:28:22.068 } 00:28:22.068 EOF 00:28:22.068 )") 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:22.068 { 00:28:22.068 "params": { 00:28:22.068 "name": "Nvme$subsystem", 00:28:22.068 "trtype": "$TEST_TRANSPORT", 00:28:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.068 "adrfam": "ipv4", 00:28:22.068 "trsvcid": "$NVMF_PORT", 00:28:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.068 "hdgst": ${hdgst:-false}, 00:28:22.068 "ddgst": ${ddgst:-false} 00:28:22.068 }, 00:28:22.068 "method": "bdev_nvme_attach_controller" 00:28:22.068 } 00:28:22.068 EOF 00:28:22.068 )") 00:28:22.068 21:15:12 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:22.068 { 00:28:22.068 "params": { 00:28:22.068 "name": "Nvme$subsystem", 00:28:22.068 "trtype": "$TEST_TRANSPORT", 00:28:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.068 "adrfam": "ipv4", 00:28:22.068 "trsvcid": "$NVMF_PORT", 00:28:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.068 "hdgst": ${hdgst:-false}, 00:28:22.068 "ddgst": ${ddgst:-false} 00:28:22.068 }, 00:28:22.068 "method": "bdev_nvme_attach_controller" 00:28:22.068 } 00:28:22.068 EOF 00:28:22.068 )") 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:22.068 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:22.068 { 00:28:22.068 "params": { 00:28:22.068 "name": "Nvme$subsystem", 00:28:22.068 "trtype": "$TEST_TRANSPORT", 00:28:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.068 "adrfam": "ipv4", 00:28:22.068 "trsvcid": "$NVMF_PORT", 00:28:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.068 "hdgst": ${hdgst:-false}, 00:28:22.069 "ddgst": ${ddgst:-false} 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 } 00:28:22.069 EOF 00:28:22.069 )") 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:22.069 { 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme$subsystem", 00:28:22.069 "trtype": "$TEST_TRANSPORT", 00:28:22.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "$NVMF_PORT", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.069 "hdgst": ${hdgst:-false}, 00:28:22.069 "ddgst": ${ddgst:-false} 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 } 00:28:22.069 EOF 00:28:22.069 )") 00:28:22.069 [2024-07-13 21:15:12.775423] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:22.069 [2024-07-13 21:15:12.775479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659795 ] 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:22.069 { 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme$subsystem", 00:28:22.069 "trtype": "$TEST_TRANSPORT", 00:28:22.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "$NVMF_PORT", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.069 "hdgst": ${hdgst:-false}, 00:28:22.069 "ddgst": ${ddgst:-false} 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 } 00:28:22.069 EOF 00:28:22.069 )") 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:22.069 { 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme$subsystem", 00:28:22.069 "trtype": "$TEST_TRANSPORT", 00:28:22.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "$NVMF_PORT", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.069 "hdgst": ${hdgst:-false}, 00:28:22.069 "ddgst": ${ddgst:-false} 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 } 00:28:22.069 EOF 00:28:22.069 )") 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:22.069 { 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme$subsystem", 00:28:22.069 "trtype": "$TEST_TRANSPORT", 00:28:22.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "$NVMF_PORT", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.069 "hdgst": ${hdgst:-false}, 00:28:22.069 "ddgst": ${ddgst:-false} 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 } 00:28:22.069 EOF 00:28:22.069 )") 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:22.069 { 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme$subsystem", 00:28:22.069 "trtype": "$TEST_TRANSPORT", 00:28:22.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "$NVMF_PORT", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:22.069 "hdgst": ${hdgst:-false}, 00:28:22.069 "ddgst": ${ddgst:-false} 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 } 00:28:22.069 EOF 00:28:22.069 )") 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:22.069 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:22.069 21:15:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme1", 00:28:22.069 "trtype": "rdma", 00:28:22.069 "traddr": "192.168.100.8", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "4420", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:22.069 "hdgst": false, 00:28:22.069 "ddgst": false 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 },{ 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme2", 00:28:22.069 "trtype": "rdma", 00:28:22.069 "traddr": "192.168.100.8", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "4420", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:22.069 "hdgst": false, 00:28:22.069 "ddgst": false 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 },{ 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme3", 00:28:22.069 "trtype": "rdma", 00:28:22.069 "traddr": "192.168.100.8", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "4420", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:22.069 "hdgst": false, 00:28:22.069 "ddgst": false 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 },{ 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme4", 00:28:22.069 "trtype": "rdma", 00:28:22.069 "traddr": "192.168.100.8", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "4420", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:22.069 "hdgst": false, 00:28:22.069 "ddgst": false 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 },{ 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme5", 00:28:22.069 "trtype": "rdma", 00:28:22.069 "traddr": "192.168.100.8", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "4420", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:22.069 "hdgst": false, 00:28:22.069 "ddgst": false 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 },{ 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme6", 00:28:22.069 "trtype": "rdma", 00:28:22.069 "traddr": "192.168.100.8", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "4420", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:22.069 "hdgst": false, 00:28:22.069 "ddgst": false 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 },{ 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme7", 00:28:22.069 "trtype": "rdma", 00:28:22.069 "traddr": "192.168.100.8", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "4420", 00:28:22.069 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:22.069 "hdgst": false, 00:28:22.069 "ddgst": false 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 },{ 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme8", 00:28:22.069 "trtype": "rdma", 00:28:22.069 "traddr": "192.168.100.8", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "4420", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:22.069 "hdgst": false, 00:28:22.069 "ddgst": false 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 },{ 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme9", 00:28:22.069 "trtype": "rdma", 00:28:22.069 "traddr": "192.168.100.8", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "4420", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:22.069 "hdgst": false, 00:28:22.069 "ddgst": false 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 },{ 00:28:22.069 "params": { 00:28:22.069 "name": "Nvme10", 00:28:22.069 "trtype": "rdma", 00:28:22.069 "traddr": "192.168.100.8", 00:28:22.069 "adrfam": "ipv4", 00:28:22.069 "trsvcid": "4420", 00:28:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:22.069 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:22.069 "hdgst": false, 00:28:22.069 "ddgst": false 00:28:22.069 }, 00:28:22.069 "method": "bdev_nvme_attach_controller" 00:28:22.069 }' 00:28:22.069 [2024-07-13 21:15:12.849372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.069 [2024-07-13 21:15:12.888237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.005 Running I/O for 1 seconds... 
00:28:24.383
00:28:24.383 Latency(us)
00:28:24.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.383 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:24.383 Verification LBA range: start 0x0 length 0x400
00:28:24.383 Nvme1n1 : 1.16 398.86 24.93 0.00 0.00 157865.60 7287.60 209715.20
00:28:24.383 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:24.383 Verification LBA range: start 0x0 length 0x400
00:28:24.383 Nvme2n1 : 1.16 401.14 25.07 0.00 0.00 154368.04 5164.24 157705.83
00:28:24.383 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:24.383 Verification LBA range: start 0x0 length 0x400
00:28:24.383 Nvme3n1 : 1.17 411.92 25.75 0.00 0.00 148677.43 10118.76 150994.94
00:28:24.383 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:24.383 Verification LBA range: start 0x0 length 0x400
00:28:24.383 Nvme4n1 : 1.17 408.12 25.51 0.00 0.00 147990.20 10276.04 143445.20
00:28:24.383 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:24.383 Verification LBA range: start 0x0 length 0x400
00:28:24.383 Nvme5n1 : 1.17 387.32 24.21 0.00 0.00 153334.52 10013.90 130023.42
00:28:24.383 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:24.383 Verification LBA range: start 0x0 length 0x400
00:28:24.383 Nvme6n1 : 1.17 392.20 24.51 0.00 0.00 149535.29 9804.19 118279.37
00:28:24.383 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:24.383 Verification LBA range: start 0x0 length 0x400
00:28:24.383 Nvme7n1 : 1.17 410.66 25.67 0.00 0.00 141254.66 9961.47 111568.49
00:28:24.383 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:24.383 Verification LBA range: start 0x0 length 0x400
00:28:24.383 Nvme8n1 : 1.17 397.52 24.85 0.00 0.00 143548.31 8178.89 100663.30
00:28:24.383 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:24.383 Verification LBA range: start 0x0 length 0x400
00:28:24.383 Nvme9n1 : 1.16 386.00 24.13 0.00 0.00 146609.30 8598.32 101082.73
00:28:24.383 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:24.383 Verification LBA range: start 0x0 length 0x400
00:28:24.383 Nvme10n1 : 1.16 275.34 17.21 0.00 0.00 202438.57 9489.61 364065.59
00:28:24.383 ===================================================================================================================
00:28:24.383 Total : 3869.08 241.82 0.00 0.00 152983.57 5164.24 364065.59
00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 --
nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:24.383 rmmod nvme_rdma 00:28:24.383 rmmod nvme_fabrics 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3659279 ']' 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3659279 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3659279 ']' 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3659279 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:24.383 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3659279 00:28:24.642 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:24.642 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:24.642 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3659279' 00:28:24.642 killing process with pid 3659279 00:28:24.642 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3659279 00:28:24.642 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3659279 00:28:24.902 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:24.902 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:24.902 00:28:24.902 real 0m12.897s 00:28:24.902 user 0m28.011s 00:28:24.902 sys 0m6.327s 00:28:24.902 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:24.902 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.902 ************************************ 00:28:24.902 END TEST nvmf_shutdown_tc1 00:28:24.902 ************************************ 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:25.162 ************************************ 00:28:25.162 START TEST nvmf_shutdown_tc2 00:28:25.162 
************************************ 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.162 21:15:15 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:25.162 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:25.162 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 
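The bracketed ID tests running here are gather_supported_nvmf_pci_devs matching each PCI function against tables of Intel (e810/x722) and Mellanox device IDs; both ports on this node report 0x15b3/0x1015, i.e. a Mellanox ConnectX-4 Lx. A standalone sketch of the same discovery, reading sysfs directly rather than the script's pci_bus_cache (an illustration, not the SPDK helper):

# List Mellanox (vendor 0x15b3) PCI functions and the net devices behind them.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")   # e.g. 0x15b3 for Mellanox
    device=$(<"$pci/device")   # e.g. 0x1015 for ConnectX-4 Lx
    if [[ $vendor == 0x15b3 ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null   # e.g. mlx_0_0 / mlx_0_1 after renaming
    fi
done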
00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:25.162 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:25.162 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:28:25.162 21:15:15 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:25.162 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:25.163 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:25.163 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:25.163 altname enp217s0f0np0 00:28:25.163 altname ens818f0np0 00:28:25.163 inet 192.168.100.8/24 scope global mlx_0_0 00:28:25.163 valid_lft forever preferred_lft forever 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:25.163 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:25.163 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:25.163 altname enp217s0f1np1 00:28:25.163 altname ens818f1np1 00:28:25.163 inet 192.168.100.9/24 scope global mlx_0_1 00:28:25.163 valid_lft forever preferred_lft forever 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:25.163 21:15:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:25.163 21:15:16 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:25.163 192.168.100.9' 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:25.163 192.168.100.9' 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:25.163 192.168.100.9' 00:28:25.163 21:15:16 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:28:25.163 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3660472 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3660472 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3660472 ']' 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:25.471 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.471 [2024-07-13 21:15:16.139391] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:25.471 [2024-07-13 21:15:16.139446] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.471 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.471 [2024-07-13 21:15:16.215150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.471 [2024-07-13 21:15:16.255297] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.471 [2024-07-13 21:15:16.255343] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:25.471 [2024-07-13 21:15:16.255352] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.471 [2024-07-13 21:15:16.255360] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.471 [2024-07-13 21:15:16.255366] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.471 [2024-07-13 21:15:16.255485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.471 [2024-07-13 21:15:16.255551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.471 [2024-07-13 21:15:16.255643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.471 [2024-07-13 21:15:16.255644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.768 [2024-07-13 21:15:16.444422] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb3ff70/0xb44460) succeed. 00:28:25.768 [2024-07-13 21:15:16.454775] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb415b0/0xb85af0) succeed. 
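With nvmf_tgt up and the RDMA transport created (the rpc_cmd nvmf_create_transport call just traced produced the two create_ib_device notices above), the per-subsystem setup that follows is batched into rpcs.txt and replayed against the target's RPC socket. Spelled out as direct scripts/rpc.py calls for a single subsystem, it amounts to roughly the sequence below; the RPC names and the transport flags are the standard SPDK ones seen in this log, while the Malloc geometry and serial number are assumptions for illustration.

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create -b Malloc1 64 512    # 64 MiB bdev, 512 B blocks (assumed sizes)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420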
00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.768 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:25.769 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:25.769 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:25.769 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:25.769 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.769 21:15:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.769 Malloc1 00:28:26.027 [2024-07-13 21:15:16.676433] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:26.027 Malloc2 00:28:26.027 Malloc3 00:28:26.027 Malloc4 
00:28:26.027 Malloc5 00:28:26.027 Malloc6 00:28:26.287 Malloc7 00:28:26.287 Malloc8 00:28:26.287 Malloc9 00:28:26.287 Malloc10 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3660738 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3660738 /var/tmp/bdevperf.sock 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3660738 ']' 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:26.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.287 { 00:28:26.287 "params": { 00:28:26.287 "name": "Nvme$subsystem", 00:28:26.287 "trtype": "$TEST_TRANSPORT", 00:28:26.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.287 "adrfam": "ipv4", 00:28:26.287 "trsvcid": "$NVMF_PORT", 00:28:26.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.287 "hdgst": ${hdgst:-false}, 00:28:26.287 "ddgst": ${ddgst:-false} 00:28:26.287 }, 00:28:26.287 "method": "bdev_nvme_attach_controller" 00:28:26.287 } 00:28:26.287 EOF 00:28:26.287 )") 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.287 { 00:28:26.287 "params": { 00:28:26.287 "name": "Nvme$subsystem", 
00:28:26.287 "trtype": "$TEST_TRANSPORT", 00:28:26.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.287 "adrfam": "ipv4", 00:28:26.287 "trsvcid": "$NVMF_PORT", 00:28:26.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.287 "hdgst": ${hdgst:-false}, 00:28:26.287 "ddgst": ${ddgst:-false} 00:28:26.287 }, 00:28:26.287 "method": "bdev_nvme_attach_controller" 00:28:26.287 } 00:28:26.287 EOF 00:28:26.287 )") 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.287 { 00:28:26.287 "params": { 00:28:26.287 "name": "Nvme$subsystem", 00:28:26.287 "trtype": "$TEST_TRANSPORT", 00:28:26.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.287 "adrfam": "ipv4", 00:28:26.287 "trsvcid": "$NVMF_PORT", 00:28:26.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.287 "hdgst": ${hdgst:-false}, 00:28:26.287 "ddgst": ${ddgst:-false} 00:28:26.287 }, 00:28:26.287 "method": "bdev_nvme_attach_controller" 00:28:26.287 } 00:28:26.287 EOF 00:28:26.287 )") 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.287 { 00:28:26.287 "params": { 00:28:26.287 "name": "Nvme$subsystem", 00:28:26.287 "trtype": "$TEST_TRANSPORT", 00:28:26.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.287 "adrfam": "ipv4", 00:28:26.287 "trsvcid": "$NVMF_PORT", 00:28:26.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.287 "hdgst": ${hdgst:-false}, 00:28:26.287 "ddgst": ${ddgst:-false} 00:28:26.287 }, 00:28:26.287 "method": "bdev_nvme_attach_controller" 00:28:26.287 } 00:28:26.287 EOF 00:28:26.287 )") 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.287 { 00:28:26.287 "params": { 00:28:26.287 "name": "Nvme$subsystem", 00:28:26.287 "trtype": "$TEST_TRANSPORT", 00:28:26.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.287 "adrfam": "ipv4", 00:28:26.287 "trsvcid": "$NVMF_PORT", 00:28:26.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.287 "hdgst": ${hdgst:-false}, 00:28:26.287 "ddgst": ${ddgst:-false} 00:28:26.287 }, 00:28:26.287 "method": "bdev_nvme_attach_controller" 00:28:26.287 } 00:28:26.287 EOF 00:28:26.287 )") 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:26.287 [2024-07-13 21:15:17.158942] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:26.287 [2024-07-13 21:15:17.158993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660738 ] 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.287 { 00:28:26.287 "params": { 00:28:26.287 "name": "Nvme$subsystem", 00:28:26.287 "trtype": "$TEST_TRANSPORT", 00:28:26.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.287 "adrfam": "ipv4", 00:28:26.287 "trsvcid": "$NVMF_PORT", 00:28:26.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.287 "hdgst": ${hdgst:-false}, 00:28:26.287 "ddgst": ${ddgst:-false} 00:28:26.287 }, 00:28:26.287 "method": "bdev_nvme_attach_controller" 00:28:26.287 } 00:28:26.287 EOF 00:28:26.287 )") 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.287 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.287 { 00:28:26.287 "params": { 00:28:26.287 "name": "Nvme$subsystem", 00:28:26.287 "trtype": "$TEST_TRANSPORT", 00:28:26.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.288 "adrfam": "ipv4", 00:28:26.288 "trsvcid": "$NVMF_PORT", 00:28:26.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.288 "hdgst": ${hdgst:-false}, 00:28:26.288 "ddgst": ${ddgst:-false} 00:28:26.288 }, 00:28:26.288 "method": "bdev_nvme_attach_controller" 00:28:26.288 } 00:28:26.288 EOF 00:28:26.288 )") 00:28:26.288 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:26.288 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.288 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.288 { 00:28:26.288 "params": { 00:28:26.288 "name": "Nvme$subsystem", 00:28:26.288 "trtype": "$TEST_TRANSPORT", 00:28:26.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.288 "adrfam": "ipv4", 00:28:26.288 "trsvcid": "$NVMF_PORT", 00:28:26.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.288 "hdgst": ${hdgst:-false}, 00:28:26.288 "ddgst": ${ddgst:-false} 00:28:26.288 }, 00:28:26.288 "method": "bdev_nvme_attach_controller" 00:28:26.288 } 00:28:26.288 EOF 00:28:26.288 )") 00:28:26.547 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:26.547 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.547 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.547 { 00:28:26.547 "params": { 00:28:26.547 "name": "Nvme$subsystem", 00:28:26.547 "trtype": "$TEST_TRANSPORT", 00:28:26.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.547 "adrfam": "ipv4", 00:28:26.547 "trsvcid": "$NVMF_PORT", 00:28:26.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.547 "hdgst": 
${hdgst:-false}, 00:28:26.547 "ddgst": ${ddgst:-false} 00:28:26.547 }, 00:28:26.547 "method": "bdev_nvme_attach_controller" 00:28:26.547 } 00:28:26.547 EOF 00:28:26.547 )") 00:28:26.547 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:26.547 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.547 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.547 { 00:28:26.547 "params": { 00:28:26.547 "name": "Nvme$subsystem", 00:28:26.547 "trtype": "$TEST_TRANSPORT", 00:28:26.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.547 "adrfam": "ipv4", 00:28:26.547 "trsvcid": "$NVMF_PORT", 00:28:26.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.547 "hdgst": ${hdgst:-false}, 00:28:26.547 "ddgst": ${ddgst:-false} 00:28:26.547 }, 00:28:26.547 "method": "bdev_nvme_attach_controller" 00:28:26.547 } 00:28:26.547 EOF 00:28:26.547 )") 00:28:26.547 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.547 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:26.547 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:28:26.547 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:26.547 21:15:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:26.547 "params": { 00:28:26.547 "name": "Nvme1", 00:28:26.547 "trtype": "rdma", 00:28:26.547 "traddr": "192.168.100.8", 00:28:26.547 "adrfam": "ipv4", 00:28:26.547 "trsvcid": "4420", 00:28:26.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:26.547 "hdgst": false, 00:28:26.547 "ddgst": false 00:28:26.547 }, 00:28:26.547 "method": "bdev_nvme_attach_controller" 00:28:26.547 },{ 00:28:26.547 "params": { 00:28:26.547 "name": "Nvme2", 00:28:26.547 "trtype": "rdma", 00:28:26.547 "traddr": "192.168.100.8", 00:28:26.547 "adrfam": "ipv4", 00:28:26.547 "trsvcid": "4420", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:26.548 "hdgst": false, 00:28:26.548 "ddgst": false 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 },{ 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme3", 00:28:26.548 "trtype": "rdma", 00:28:26.548 "traddr": "192.168.100.8", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "4420", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:26.548 "hdgst": false, 00:28:26.548 "ddgst": false 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 },{ 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme4", 00:28:26.548 "trtype": "rdma", 00:28:26.548 "traddr": "192.168.100.8", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "4420", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:26.548 "hdgst": false, 00:28:26.548 "ddgst": false 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 },{ 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme5", 00:28:26.548 "trtype": "rdma", 00:28:26.548 "traddr": "192.168.100.8", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "4420", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:26.548 "hostnqn": 
"nqn.2016-06.io.spdk:host5", 00:28:26.548 "hdgst": false, 00:28:26.548 "ddgst": false 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 },{ 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme6", 00:28:26.548 "trtype": "rdma", 00:28:26.548 "traddr": "192.168.100.8", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "4420", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:26.548 "hdgst": false, 00:28:26.548 "ddgst": false 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 },{ 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme7", 00:28:26.548 "trtype": "rdma", 00:28:26.548 "traddr": "192.168.100.8", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "4420", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:26.548 "hdgst": false, 00:28:26.548 "ddgst": false 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 },{ 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme8", 00:28:26.548 "trtype": "rdma", 00:28:26.548 "traddr": "192.168.100.8", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "4420", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:26.548 "hdgst": false, 00:28:26.548 "ddgst": false 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 },{ 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme9", 00:28:26.548 "trtype": "rdma", 00:28:26.548 "traddr": "192.168.100.8", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "4420", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:26.548 "hdgst": false, 00:28:26.548 "ddgst": false 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 },{ 00:28:26.548 "params": { 00:28:26.548 "name": "Nvme10", 00:28:26.548 "trtype": "rdma", 00:28:26.548 "traddr": "192.168.100.8", 00:28:26.548 "adrfam": "ipv4", 00:28:26.548 "trsvcid": "4420", 00:28:26.548 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:26.548 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:26.548 "hdgst": false, 00:28:26.548 "ddgst": false 00:28:26.548 }, 00:28:26.548 "method": "bdev_nvme_attach_controller" 00:28:26.548 }' 00:28:26.548 [2024-07-13 21:15:17.231938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.548 [2024-07-13 21:15:17.270573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.485 Running I/O for 10 seconds... 
00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.485 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.743 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.743 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=19 00:28:27.743 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 19 -ge 100 ']' 00:28:27.743 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=179 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 179 -ge 100 ']' 00:28:28.002 21:15:18 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3660738 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3660738 ']' 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3660738 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:28.002 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3660738 00:28:28.261 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:28.261 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:28.261 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3660738' 00:28:28.261 killing process with pid 3660738 00:28:28.261 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3660738 00:28:28.261 21:15:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3660738
00:28:28.261 Received shutdown signal, test time was about 0.820486 seconds
00:28:28.261
00:28:28.261 Latency(us)
00:28:28.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.261 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.261 Verification LBA range: start 0x0 length 0x400
00:28:28.261 Nvme1n1 : 0.81 376.83 23.55 0.00 0.00 166270.87 5924.45 233203.30
00:28:28.261 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.261 Verification LBA range: start 0x0 length 0x400
00:28:28.261 Nvme2n1 : 0.81 396.11 24.76 0.00 0.00 154897.20 7549.75 161900.13
00:28:28.261 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.261 Verification LBA range: start 0x0 length 0x400
00:28:28.261 Nvme3n1 : 0.81 395.57 24.72 0.00 0.00 152155.01 7811.89 156028.11
00:28:28.261 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.261 Verification LBA range: start 0x0 length 0x400
00:28:28.261 Nvme4n1 : 0.81 395.02 24.69 0.00 0.00 149433.71 8074.04 149317.22
00:28:28.261 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.261 Verification LBA range: start 0x0 length 0x400
00:28:28.261 Nvme5n1 : 0.81 394.38 24.65 0.00 0.00 147084.25 8650.75 138412.03
00:28:28.261 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.261 Verification LBA range: start 0x0 length 0x400
00:28:28.261 Nvme6n1 : 0.81 393.83 24.61 0.00 0.00 143938.85 8965.32 131701.15
00:28:28.261 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.261 Verification LBA range: start 0x0 length 0x400
00:28:28.261 Nvme7n1 : 0.81 393.30 24.58 0.00 0.00 141148.16 9227.47 124990.26
00:28:28.261 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.261 Verification LBA range: start 0x0 length 0x400
00:28:28.261 Nvme8n1 : 0.81 392.77 24.55 0.00 0.00 138328.15 9489.61 117440.51
00:28:28.261 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.261 Verification LBA range: start 0x0 length 0x400
00:28:28.261 Nvme9n1 : 0.82 392.14 24.51 0.00 0.00 136017.02 9961.47 106954.75
00:28:28.261 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:28.261 Verification LBA range: start 0x0 length 0x400
00:28:28.261 Nvme10n1 : 0.82 312.25 19.52 0.00 0.00 167340.80 2726.30 238236.47
00:28:28.261 ===================================================================================================================
00:28:28.261 Total : 3842.20 240.14 0.00 0.00 149213.57 2726.30 238236.47
00:28:28.519 21:15:19 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3660472 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:29.465 rmmod nvme_rdma 00:28:29.465 rmmod nvme_fabrics 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3660472 ']' 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3660472 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3660472 ']' 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3660472 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:29.465 21:15:20
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3660472 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3660472' 00:28:29.465 killing process with pid 3660472 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3660472 00:28:29.465 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3660472 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:30.033 00:28:30.033 real 0m4.952s 00:28:30.033 user 0m19.797s 00:28:30.033 sys 0m1.164s 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.033 ************************************ 00:28:30.033 END TEST nvmf_shutdown_tc2 00:28:30.033 ************************************ 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:30.033 ************************************ 00:28:30.033 START TEST nvmf_shutdown_tc3 00:28:30.033 ************************************ 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.033 21:15:20 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.033 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:30.034 21:15:20 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:30.034 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:30.034 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:30.034 Found net 
devices under 0000:d9:00.0: mlx_0_0 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:30.034 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:30.034 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:30.294 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:30.294 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:30.294 altname enp217s0f0np0 00:28:30.294 altname ens818f0np0 00:28:30.294 inet 192.168.100.8/24 scope global mlx_0_0 00:28:30.294 valid_lft forever preferred_lft forever 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:30.294 21:15:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:30.294 21:15:20 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:30.294 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:30.294 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:30.294 altname enp217s0f1np1 00:28:30.294 altname ens818f1np1 00:28:30.294 inet 192.168.100.9/24 scope global mlx_0_1 00:28:30.294 valid_lft forever preferred_lft forever 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:30.294 192.168.100.9' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:30.294 192.168.100.9' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:30.294 192.168.100.9' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:30.294 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3661513 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3661513 00:28:30.295 21:15:21 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3661513 ']' 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:30.295 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:30.295 [2024-07-13 21:15:21.146236] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:30.295 [2024-07-13 21:15:21.146286] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.295 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.554 [2024-07-13 21:15:21.217170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:30.554 [2024-07-13 21:15:21.256658] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.554 [2024-07-13 21:15:21.256700] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.554 [2024-07-13 21:15:21.256711] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.554 [2024-07-13 21:15:21.256719] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.554 [2024-07-13 21:15:21.256727] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
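The interface probing traced a few records above (nvmf/common.sh@112-@113 and @456-@458) is how the test derived NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9 before starting the target. The helper reduces to this, with mlx_0_0/mlx_0_1 being the RDMA netdevs get_rdma_if_list found on this machine:

# get_ip_address and the two-address split, as read from the trace.
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one record per address; field 4 is ADDR/PREFIX.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9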
00:28:30.554 [2024-07-13 21:15:21.256836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:30.554 [2024-07-13 21:15:21.256923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:30.554 [2024-07-13 21:15:21.257022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.554 [2024-07-13 21:15:21.257034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:31.121 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:31.121 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:31.121 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:31.121 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.121 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:31.121 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.121 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:31.121 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.121 21:15:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:31.380 [2024-07-13 21:15:22.016483] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xab9f70/0xabe460) succeed. 00:28:31.380 [2024-07-13 21:15:22.026900] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xabb5b0/0xaffaf0) succeed. 
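With the target's reactors up, the first RPC creates the RDMA transport (target/shutdown.sh@20), which is what triggers the two create_ib_device notices for mlx5_0/mlx5_1 above. rpc_cmd is the autotest wrapper around scripts/rpc.py; the definition below is a simplified stand-in, not the real wrapper, but the transport call is exactly the one traced:

# Simplified stand-in for rpc_cmd; the real wrapper in common/autotest_common.sh
# also squelches xtrace and targets the default socket /var/tmp/spdk.sock.
rpc_cmd() {
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py "$@"
}

# The call traced at target/shutdown.sh@20:
#   -t rdma                transport type
#   --num-shared-buffers   1024 shared receive buffers
#   -u                     I/O unit size in bytes (8192)
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192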
00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.380 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:31.380 Malloc1 00:28:31.380 [2024-07-13 21:15:22.244956] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:31.380 Malloc2 00:28:31.639 Malloc3 00:28:31.639 Malloc4 
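The Malloc1..Malloc4 lines above (and Malloc5..Malloc10 just below) come from create_subsystems: the repeated shutdown.sh@27/@28 "for i ... cat" pairs append one block of RPCs per subsystem to rpcs.txt, and the lone rpc_cmd at shutdown.sh@35 replays the whole file in a single rpc.py invocation. The trace never prints the block bodies, so the four RPC lines in this sketch are an assumption about their shape; only the cat loop and the final rpc_cmd are traced:

# Batching pattern from target/shutdown.sh@26-@35.
rm -rf "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do    # num_subsystems=({1..10})
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"    # one rpc.py process executes every queued call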
00:28:31.639 Malloc5 00:28:31.639 Malloc6 00:28:31.639 Malloc7 00:28:31.639 Malloc8 00:28:31.898 Malloc9 00:28:31.898 Malloc10 00:28:31.898 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.898 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:31.898 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.898 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:31.898 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3661801 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3661801 /var/tmp/bdevperf.sock 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3661801 ']' 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:31.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.899 { 00:28:31.899 "params": { 00:28:31.899 "name": "Nvme$subsystem", 00:28:31.899 "trtype": "$TEST_TRANSPORT", 00:28:31.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.899 "adrfam": "ipv4", 00:28:31.899 "trsvcid": "$NVMF_PORT", 00:28:31.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.899 "hdgst": ${hdgst:-false}, 00:28:31.899 "ddgst": ${ddgst:-false} 00:28:31.899 }, 00:28:31.899 "method": "bdev_nvme_attach_controller" 00:28:31.899 } 00:28:31.899 EOF 00:28:31.899 )") 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.899 { 00:28:31.899 "params": { 00:28:31.899 "name": "Nvme$subsystem", 
00:28:31.899 "trtype": "$TEST_TRANSPORT", 00:28:31.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.899 "adrfam": "ipv4", 00:28:31.899 "trsvcid": "$NVMF_PORT", 00:28:31.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.899 "hdgst": ${hdgst:-false}, 00:28:31.899 "ddgst": ${ddgst:-false} 00:28:31.899 }, 00:28:31.899 "method": "bdev_nvme_attach_controller" 00:28:31.899 } 00:28:31.899 EOF 00:28:31.899 )") 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.899 { 00:28:31.899 "params": { 00:28:31.899 "name": "Nvme$subsystem", 00:28:31.899 "trtype": "$TEST_TRANSPORT", 00:28:31.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.899 "adrfam": "ipv4", 00:28:31.899 "trsvcid": "$NVMF_PORT", 00:28:31.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.899 "hdgst": ${hdgst:-false}, 00:28:31.899 "ddgst": ${ddgst:-false} 00:28:31.899 }, 00:28:31.899 "method": "bdev_nvme_attach_controller" 00:28:31.899 } 00:28:31.899 EOF 00:28:31.899 )") 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.899 { 00:28:31.899 "params": { 00:28:31.899 "name": "Nvme$subsystem", 00:28:31.899 "trtype": "$TEST_TRANSPORT", 00:28:31.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.899 "adrfam": "ipv4", 00:28:31.899 "trsvcid": "$NVMF_PORT", 00:28:31.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.899 "hdgst": ${hdgst:-false}, 00:28:31.899 "ddgst": ${ddgst:-false} 00:28:31.899 }, 00:28:31.899 "method": "bdev_nvme_attach_controller" 00:28:31.899 } 00:28:31.899 EOF 00:28:31.899 )") 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.899 { 00:28:31.899 "params": { 00:28:31.899 "name": "Nvme$subsystem", 00:28:31.899 "trtype": "$TEST_TRANSPORT", 00:28:31.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.899 "adrfam": "ipv4", 00:28:31.899 "trsvcid": "$NVMF_PORT", 00:28:31.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.899 "hdgst": ${hdgst:-false}, 00:28:31.899 "ddgst": ${ddgst:-false} 00:28:31.899 }, 00:28:31.899 "method": "bdev_nvme_attach_controller" 00:28:31.899 } 00:28:31.899 EOF 00:28:31.899 )") 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:31.899 [2024-07-13 21:15:22.727501] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:31.899 [2024-07-13 21:15:22.727555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661801 ] 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.899 { 00:28:31.899 "params": { 00:28:31.899 "name": "Nvme$subsystem", 00:28:31.899 "trtype": "$TEST_TRANSPORT", 00:28:31.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.899 "adrfam": "ipv4", 00:28:31.899 "trsvcid": "$NVMF_PORT", 00:28:31.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.899 "hdgst": ${hdgst:-false}, 00:28:31.899 "ddgst": ${ddgst:-false} 00:28:31.899 }, 00:28:31.899 "method": "bdev_nvme_attach_controller" 00:28:31.899 } 00:28:31.899 EOF 00:28:31.899 )") 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.899 { 00:28:31.899 "params": { 00:28:31.899 "name": "Nvme$subsystem", 00:28:31.899 "trtype": "$TEST_TRANSPORT", 00:28:31.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.899 "adrfam": "ipv4", 00:28:31.899 "trsvcid": "$NVMF_PORT", 00:28:31.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.899 "hdgst": ${hdgst:-false}, 00:28:31.899 "ddgst": ${ddgst:-false} 00:28:31.899 }, 00:28:31.899 "method": "bdev_nvme_attach_controller" 00:28:31.899 } 00:28:31.899 EOF 00:28:31.899 )") 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.899 { 00:28:31.899 "params": { 00:28:31.899 "name": "Nvme$subsystem", 00:28:31.899 "trtype": "$TEST_TRANSPORT", 00:28:31.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.899 "adrfam": "ipv4", 00:28:31.899 "trsvcid": "$NVMF_PORT", 00:28:31.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.899 "hdgst": ${hdgst:-false}, 00:28:31.899 "ddgst": ${ddgst:-false} 00:28:31.899 }, 00:28:31.899 "method": "bdev_nvme_attach_controller" 00:28:31.899 } 00:28:31.899 EOF 00:28:31.899 )") 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.899 { 00:28:31.899 "params": { 00:28:31.899 "name": "Nvme$subsystem", 00:28:31.899 "trtype": "$TEST_TRANSPORT", 00:28:31.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.899 "adrfam": "ipv4", 00:28:31.899 "trsvcid": "$NVMF_PORT", 00:28:31.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.899 "hdgst": 
${hdgst:-false}, 00:28:31.899 "ddgst": ${ddgst:-false} 00:28:31.899 }, 00:28:31.899 "method": "bdev_nvme_attach_controller" 00:28:31.899 } 00:28:31.899 EOF 00:28:31.899 )") 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.899 { 00:28:31.899 "params": { 00:28:31.899 "name": "Nvme$subsystem", 00:28:31.899 "trtype": "$TEST_TRANSPORT", 00:28:31.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.899 "adrfam": "ipv4", 00:28:31.899 "trsvcid": "$NVMF_PORT", 00:28:31.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.899 "hdgst": ${hdgst:-false}, 00:28:31.899 "ddgst": ${ddgst:-false} 00:28:31.899 }, 00:28:31.899 "method": "bdev_nvme_attach_controller" 00:28:31.899 } 00:28:31.899 EOF 00:28:31.899 )") 00:28:31.899 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:31.899 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:31.900 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:31.900 21:15:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:31.900 "params": { 00:28:31.900 "name": "Nvme1", 00:28:31.900 "trtype": "rdma", 00:28:31.900 "traddr": "192.168.100.8", 00:28:31.900 "adrfam": "ipv4", 00:28:31.900 "trsvcid": "4420", 00:28:31.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:31.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:31.900 "hdgst": false, 00:28:31.900 "ddgst": false 00:28:31.900 }, 00:28:31.900 "method": "bdev_nvme_attach_controller" 00:28:31.900 },{ 00:28:31.900 "params": { 00:28:31.900 "name": "Nvme2", 00:28:31.900 "trtype": "rdma", 00:28:31.900 "traddr": "192.168.100.8", 00:28:31.900 "adrfam": "ipv4", 00:28:31.900 "trsvcid": "4420", 00:28:31.900 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:31.900 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:31.900 "hdgst": false, 00:28:31.900 "ddgst": false 00:28:31.900 }, 00:28:31.900 "method": "bdev_nvme_attach_controller" 00:28:31.900 },{ 00:28:31.900 "params": { 00:28:31.900 "name": "Nvme3", 00:28:31.900 "trtype": "rdma", 00:28:31.900 "traddr": "192.168.100.8", 00:28:31.900 "adrfam": "ipv4", 00:28:31.900 "trsvcid": "4420", 00:28:31.900 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:31.900 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:31.900 "hdgst": false, 00:28:31.900 "ddgst": false 00:28:31.900 }, 00:28:31.900 "method": "bdev_nvme_attach_controller" 00:28:31.900 },{ 00:28:31.900 "params": { 00:28:31.900 "name": "Nvme4", 00:28:31.900 "trtype": "rdma", 00:28:31.900 "traddr": "192.168.100.8", 00:28:31.900 "adrfam": "ipv4", 00:28:31.900 "trsvcid": "4420", 00:28:31.900 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:31.900 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:31.900 "hdgst": false, 00:28:31.900 "ddgst": false 00:28:31.900 }, 00:28:31.900 "method": "bdev_nvme_attach_controller" 00:28:31.900 },{ 00:28:31.900 "params": { 00:28:31.900 "name": "Nvme5", 00:28:31.900 "trtype": "rdma", 00:28:31.900 "traddr": "192.168.100.8", 00:28:31.900 "adrfam": "ipv4", 00:28:31.900 "trsvcid": "4420", 00:28:31.900 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:31.900 "hostnqn": 
"nqn.2016-06.io.spdk:host5", 00:28:31.900 "hdgst": false, 00:28:31.900 "ddgst": false 00:28:31.900 }, 00:28:31.900 "method": "bdev_nvme_attach_controller" 00:28:31.900 },{ 00:28:31.900 "params": { 00:28:31.900 "name": "Nvme6", 00:28:31.900 "trtype": "rdma", 00:28:31.900 "traddr": "192.168.100.8", 00:28:31.900 "adrfam": "ipv4", 00:28:31.900 "trsvcid": "4420", 00:28:31.900 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:31.900 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:31.900 "hdgst": false, 00:28:31.900 "ddgst": false 00:28:31.900 }, 00:28:31.900 "method": "bdev_nvme_attach_controller" 00:28:31.900 },{ 00:28:31.900 "params": { 00:28:31.900 "name": "Nvme7", 00:28:31.900 "trtype": "rdma", 00:28:31.900 "traddr": "192.168.100.8", 00:28:31.900 "adrfam": "ipv4", 00:28:31.900 "trsvcid": "4420", 00:28:31.900 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:31.900 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:31.900 "hdgst": false, 00:28:31.900 "ddgst": false 00:28:31.900 }, 00:28:31.900 "method": "bdev_nvme_attach_controller" 00:28:31.900 },{ 00:28:31.900 "params": { 00:28:31.900 "name": "Nvme8", 00:28:31.900 "trtype": "rdma", 00:28:31.900 "traddr": "192.168.100.8", 00:28:31.900 "adrfam": "ipv4", 00:28:31.900 "trsvcid": "4420", 00:28:31.900 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:31.900 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:31.900 "hdgst": false, 00:28:31.900 "ddgst": false 00:28:31.900 }, 00:28:31.900 "method": "bdev_nvme_attach_controller" 00:28:31.900 },{ 00:28:31.900 "params": { 00:28:31.900 "name": "Nvme9", 00:28:31.900 "trtype": "rdma", 00:28:31.900 "traddr": "192.168.100.8", 00:28:31.900 "adrfam": "ipv4", 00:28:31.900 "trsvcid": "4420", 00:28:31.900 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:31.900 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:31.900 "hdgst": false, 00:28:31.900 "ddgst": false 00:28:31.900 }, 00:28:31.900 "method": "bdev_nvme_attach_controller" 00:28:31.900 },{ 00:28:31.900 "params": { 00:28:31.900 "name": "Nvme10", 00:28:31.900 "trtype": "rdma", 00:28:31.900 "traddr": "192.168.100.8", 00:28:31.900 "adrfam": "ipv4", 00:28:31.900 "trsvcid": "4420", 00:28:31.900 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:31.900 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:31.900 "hdgst": false, 00:28:31.900 "ddgst": false 00:28:31.900 }, 00:28:31.900 "method": "bdev_nvme_attach_controller" 00:28:31.900 }' 00:28:32.160 [2024-07-13 21:15:22.802912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.160 [2024-07-13 21:15:22.841621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.093 Running I/O for 10 seconds... 
00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.093 21:15:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.351 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.351 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=19 00:28:33.351 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 19 -ge 100 ']' 00:28:33.351 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.611 21:15:24 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=174 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 174 -ge 100 ']' 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3661513 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3661513 ']' 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3661513 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3661513 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3661513' 00:28:33.611 killing process with pid 3661513 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3661513 00:28:33.611 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3661513 00:28:34.179 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:34.179 21:15:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:34.749 [2024-07-13 21:15:25.486455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.486500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:873131b0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.486514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.486523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:873131b0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.486532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.486541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:873131b0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.486550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.486559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:873131b0 sqhd:0000 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 
21:15:25.488977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:34.750 [2024-07-13 21:15:25.489037] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:34.750 [2024-07-13 21:15:25.489093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.489128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.489161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.489192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.489224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.489256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.489288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.489320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.491820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:34.750 [2024-07-13 21:15:25.491873] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
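
The shell trace just before this failure dump exercised two helpers: waitforio, which polls bdev_get_iostat on the bdevperf RPC socket until Nvme1n1 reports at least 100 reads (19 on the first poll, 174 on the second), and killprocess, which checks that the pid is still alive and is not sudo before killing it. A hedged sketch of both, reconstructed from the xtrace rather than copied from target/shutdown.sh or autotest_common.sh; rpc_cmd is the suite's wrapper around scripts/rpc.py.

waitforio() {
    local sock=$1 bdev=$2 i=10 count
    while ((i != 0)); do
        count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        # Enough I/O has been observed; the shutdown test can proceed.
        [[ $count -ge 100 ]] && return 0
        sleep 0.25
        ((i--))
    done
    return 1
}

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone
    # Refuse to kill sudo itself, as the traced helper does.
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

waitforio /var/tmp/bdevperf.sock Nvme1n1 mirrors the shutdown.sh@59-67 checks above; killprocess 3661513 mirrors the @946-970 sequence that stopped the target.
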
00:28:34.750 [2024-07-13 21:15:25.491931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.491941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.491951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.491959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.491969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.491979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.491987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.491996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.494465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:34.750 [2024-07-13 21:15:25.494506] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.750 [2024-07-13 21:15:25.494557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.494590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.494623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.494654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.494687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.494718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.494757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.494769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.497210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:34.750 [2024-07-13 21:15:25.497251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:28:34.750 [2024-07-13 21:15:25.497302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.497334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.497368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.497398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.497438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.497470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.497502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.497532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.500970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:34.750 [2024-07-13 21:15:25.501048] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:34.750 [2024-07-13 21:15:25.501110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.501147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.501184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.501216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.501248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.501279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.501311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.501343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.504557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:34.750 [2024-07-13 21:15:25.504601] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:28:34.750 [2024-07-13 21:15:25.504652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.504685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.504719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.504750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.504782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.504813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.504845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.504876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.507378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:34.750 [2024-07-13 21:15:25.507419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:34.750 [2024-07-13 21:15:25.507475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.507508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.507542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.507573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.507606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.507637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.507670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.507701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.510260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:34.750 [2024-07-13 21:15:25.510302] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:28:34.750 [2024-07-13 21:15:25.510353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.510385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.510420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.750 [2024-07-13 21:15:25.510450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.750 [2024-07-13 21:15:25.510483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.751 [2024-07-13 21:15:25.510515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.510547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.751 [2024-07-13 21:15:25.510579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.512685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:34.751 [2024-07-13 21:15:25.512725] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:34.751 [2024-07-13 21:15:25.512776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.751 [2024-07-13 21:15:25.512810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.512843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.751 [2024-07-13 21:15:25.512874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.512906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.751 [2024-07-13 21:15:25.512944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.512977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:34.751 [2024-07-13 21:15:25.513009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:21862 cdw0:873131b0 sqhd:0900 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.515357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:34.751 [2024-07-13 21:15:25.515401] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
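
Each of the subsystems above fails the same way: the admin qpair reports CQ transport error -6, its four outstanding ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION, and the controller is marked failed. To confirm that every cnode reached the failed state, a quick pass over the captured log is enough (a sketch; the file name bdevperf.log is an assumption):

# List each subsystem that logged "in failed state", deduplicated.
grep -o 'nqn.2016-06.io.spdk:cnode[0-9]*] in failed state' bdevperf.log | sort -u

Ten distinct lines, cnode1 through cnode10, are expected for this test case.
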
00:28:34.751 [2024-07-13 21:15:25.518001] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:28:34.751 [2024-07-13 21:15:25.518027] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.751 [2024-07-13 21:15:25.520442] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:28:34.751 [2024-07-13 21:15:25.520486] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.751 [2024-07-13 21:15:25.522728] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:28:34.751 [2024-07-13 21:15:25.522747] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.751 [2024-07-13 21:15:25.525159] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:28:34.751 [2024-07-13 21:15:25.525202] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.751 [2024-07-13 21:15:25.527359] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:28:34.751 [2024-07-13 21:15:25.527378] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.751 [2024-07-13 21:15:25.529430] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:28:34.751 [2024-07-13 21:15:25.529448] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.751 [2024-07-13 21:15:25.531490] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:28:34.751 [2024-07-13 21:15:25.531508] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.751 [2024-07-13 21:15:25.533522] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 00:28:34.751 [2024-07-13 21:15:25.533564] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.751 [2024-07-13 21:15:25.535658] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 00:28:34.751 [2024-07-13 21:15:25.535703] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
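
By this point every I/O qpair has been disconnected and freed and bdev_nvme is resetting all ten controllers; the repeated "Unable to perform failover, already in progress" notices simply mean more than one path requested failover for the same controller at once, which is expected while the whole target goes away. During such a window the controller list can be watched from the shell. A speculative sketch: bdev_nvme_get_controllers is a standard SPDK RPC, but its output shape varies across versions, so the jq filter assumes nothing beyond a name field per entry.

watch_controllers() {
    local sock=${1:-/var/tmp/bdevperf.sock}
    local n
    for ((n = 0; n < 20; n++)); do
        # Print the controller names bdevperf still knows about; they
        # should stay registered while their qpairs try to reconnect.
        rpc_cmd -s "$sock" bdev_nvme_get_controllers |
            jq -r '[.[].name] | join(" ")'
        sleep 0.5
    done
}
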
00:28:34.751 [2024-07-13 21:15:25.535911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.535949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 
21:15:25.536331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183600 00:28:34.751 [2024-07-13 21:15:25.536440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183f00 00:28:34.751 [2024-07-13 21:15:25.536794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.751 [2024-07-13 21:15:25.536812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.536828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.536847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.536860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.536878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.536892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.536910] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.536923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.536941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.536956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.536974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.536986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 
nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183f00 00:28:34.752 [2024-07-13 21:15:25.537446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22144 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 
len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.537975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.537989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.538007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.538062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.538081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x183800 00:28:34.752 [2024-07-13 21:15:25.538094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.752 [2024-07-13 21:15:25.538112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183600 
00:28:34.752 [2024-07-13 21:15:25.538125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d63d6000 sqhd:52d0 p:0 m:0 dnr:0 00:28:34.753 [2024-07-13 21:15:25.557033] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller. 00:28:34.753 [2024-07-13 21:15:25.557053] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.753 [2024-07-13 21:15:25.557125] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.753 [2024-07-13 21:15:25.557139] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.753 [2024-07-13 21:15:25.557152] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.753 [2024-07-13 21:15:25.557164] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.753 [2024-07-13 21:15:25.557176] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.753 [2024-07-13 21:15:25.557188] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.753 [2024-07-13 21:15:25.557200] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.753 [2024-07-13 21:15:25.557212] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.753 [2024-07-13 21:15:25.557224] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:34.753 [2024-07-13 21:15:25.557236] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
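
The long dump above is the in-flight queue draining: every outstanding WRITE on qid 1, from lba 16384 upward in 128-block steps, completes as ABORTED - SQ DELETION once the send queue is torn down, after which the last qpair is freed and the resets continue. Dumps like this condense well with standard tools; a log-processing sketch, again assuming the console output was captured to bdevperf.log:

grep -o 'WRITE sqid:[0-9]* cid:[0-9]* nsid:[0-9]* lba:[0-9]* len:[0-9]*' bdevperf.log |
    awk '{
        # Field 5 is "lba:<n>"; track the count and the LBA span.
        split($5, a, ":"); lba = a[2] + 0
        if (n == 0 || lba < min) min = lba
        if (lba > max) max = lba
        n++
    }
    END { printf "aborted WRITEs: %d, lba %d..%d\n", n, min, max }'
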
00:28:34.753 [2024-07-13 21:15:25.557813] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.753 [2024-07-13 21:15:25.557829] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:34.753 [2024-07-13 21:15:25.557844] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:34.753 [2024-07-13 21:15:25.557854] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:34.753 [2024-07-13 21:15:25.557865] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:34.753 [2024-07-13 21:15:25.558146] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:34.753 [2024-07-13 21:15:25.558160] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:34.753 [2024-07-13 21:15:25.558171] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:34.753 [2024-07-13 21:15:25.558192] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:34.753 [2024-07-13 21:15:25.558203] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:34.753 [2024-07-13 21:15:25.581022] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:34.753 [2024-07-13 21:15:25.581084] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:34.753 [2024-07-13 21:15:25.581097] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:34.753 [2024-07-13 21:15:25.581191] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:34.753 [2024-07-13 21:15:25.581206] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:34.753 [2024-07-13 21:15:25.581217] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:28:34.753 [2024-07-13 21:15:25.581296] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:34.753 [2024-07-13 21:15:25.581310] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:34.753 [2024-07-13 21:15:25.581320] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:28:34.753 [2024-07-13 21:15:25.581408] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:34.753 [2024-07-13 21:15:25.581422] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:34.753 [2024-07-13 21:15:25.581432] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 00:28:34.753 [2024-07-13 21:15:25.581513] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:34.753 
[2024-07-13 21:15:25.581527] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:34.753 [2024-07-13 21:15:25.581538] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340
00:28:34.753 [2024-07-13 21:15:25.581687] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:34.753 [2024-07-13 21:15:25.581702] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:34.753 [2024-07-13 21:15:25.581712] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0
00:28:34.753 [2024-07-13 21:15:25.581797] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:34.753 [2024-07-13 21:15:25.581812] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:34.753 [2024-07-13 21:15:25.581822] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080
00:28:34.753 [2024-07-13 21:15:25.581917] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:34.753 [2024-07-13 21:15:25.581931] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:34.753 [2024-07-13 21:15:25.581943] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0
00:28:34.753 [2024-07-13 21:15:25.582060] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:34.753 [2024-07-13 21:15:25.582074] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:34.753 [2024-07-13 21:15:25.582085] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500
00:28:34.753 [2024-07-13 21:15:25.582184] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:34.753 [2024-07-13 21:15:25.582200] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:34.753 [2024-07-13 21:15:25.582210] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040
00:28:34.753 task offset: 38272 on job bdev=Nvme1n1 fails
00:28:34.753
00:28:34.753 Latency(us)
00:28:34.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:34.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.753 Job: Nvme1n1 ended in about 1.86 seconds with error
00:28:34.753 Verification LBA range: start 0x0 length 0x400
00:28:34.753 Nvme1n1 : 1.86 145.90 9.12 34.33 0.00 350765.72 5688.52 1040187.39
00:28:34.753 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.753 Job: Nvme2n1 ended in about 1.87 seconds with error
00:28:34.753 Verification LBA range: start 0x0 length 0x400
00:28:34.753 Nvme2n1 : 1.87 145.81 9.11 34.31 0.00 347975.09 7130.32 1040187.39
00:28:34.753 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.753 Job: Nvme3n1 ended in about 1.87 seconds with error
00:28:34.753 Verification LBA range: start 0x0 length 0x400
00:28:34.753 Nvme3n1 : 1.87 154.31 9.64 34.29 0.00 329651.52 9594.47 1040187.39
00:28:34.753 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.753 Job: Nvme4n1 ended in about 1.87 seconds with error
00:28:34.753 Verification LBA range: start 0x0 length 0x400
00:28:34.753 Nvme4n1 : 1.87 154.22 9.64 34.27 0.00 326906.68 18874.37 1033476.51
00:28:34.753 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.753 Job: Nvme5n1 ended in about 1.87 seconds with error
00:28:34.753 Verification LBA range: start 0x0 length 0x400
00:28:34.753 Nvme5n1 : 1.87 139.69 8.73 34.25 0.00 351082.12 25375.54 1033476.51
00:28:34.753 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.753 Job: Nvme6n1 ended in about 1.87 seconds with error
00:28:34.753 Verification LBA range: start 0x0 length 0x400
00:28:34.753 Nvme6n1 : 1.87 154.06 9.63 34.24 0.00 321352.80 29150.41 1033476.51
00:28:34.753 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.753 Job: Nvme7n1 ended in about 1.87 seconds with error
00:28:34.753 Verification LBA range: start 0x0 length 0x400
00:28:34.753 Nvme7n1 : 1.87 153.98 9.62 34.22 0.00 318644.97 36700.16 1033476.51
00:28:34.753 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.753 Job: Nvme8n1 ended in about 1.87 seconds with error
00:28:34.753 Verification LBA range: start 0x0 length 0x400
00:28:34.753 Nvme8n1 : 1.87 151.76 9.48 34.20 0.00 319640.31 44669.34 1033476.51
00:28:34.753 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.753 Job: Nvme9n1 ended in about 1.87 seconds with error
00:28:34.753 Verification LBA range: start 0x0 length 0x400
00:28:34.753 Nvme9n1 : 1.87 141.00 8.81 34.18 0.00 336482.92 35441.87 1033476.51
00:28:34.753 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.753 Job: Nvme10n1 ended in about 1.83 seconds with error
00:28:34.753 Verification LBA range: start 0x0 length 0x400
00:28:34.753 Nvme10n1 : 1.83 69.88 4.37 34.94 0.00 560868.01 59978.55 1067030.94
00:28:34.753 ===================================================================================================================
00:28:34.753 Total : 1410.61 88.16 343.23 0.00 346640.45 5688.52 1067030.94
00:28:34.753 [2024-07-13 21:15:25.604213] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3661801
00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:28:35.322 21:15:25
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:35.322 rmmod nvme_rdma 00:28:35.322 rmmod nvme_fabrics 00:28:35.322 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 3661801 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:35.322 00:28:35.322 real 0m5.119s 00:28:35.322 user 0m17.436s 00:28:35.322 sys 0m1.330s 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:35.322 21:15:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.322 ************************************ 00:28:35.322 END TEST nvmf_shutdown_tc3 00:28:35.322 ************************************ 00:28:35.322 21:15:26 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:35.322 00:28:35.322 real 0m23.337s 00:28:35.322 user 1m5.391s 00:28:35.322 sys 0m9.071s 00:28:35.322 21:15:26 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:35.322 21:15:26 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:35.322 ************************************ 00:28:35.322 END TEST nvmf_shutdown 00:28:35.322 ************************************ 00:28:35.322 21:15:26 nvmf_rdma -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:35.322 21:15:26 nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.322 21:15:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:35.322 21:15:26 nvmf_rdma -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:35.322 21:15:26 nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:35.322 21:15:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:35.322 21:15:26 nvmf_rdma -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:35.322 21:15:26 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:28:35.322 21:15:26 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:35.322 21:15:26 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:35.322 21:15:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:35.322 ************************************ 00:28:35.322 START TEST nvmf_multicontroller 00:28:35.322 
************************************ 00:28:35.322 21:15:26 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:28:35.582 * Looking for test storage... 00:28:35.582 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:28:35.582 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:28:35.582 00:28:35.582 real 0m0.109s 00:28:35.582 user 0m0.046s 00:28:35.582 sys 0m0.069s 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:35.582 21:15:26 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.582 ************************************ 00:28:35.582 END TEST nvmf_multicontroller 00:28:35.582 ************************************ 00:28:35.582 21:15:26 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:28:35.582 21:15:26 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:35.582 21:15:26 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:35.582 21:15:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:35.582 ************************************ 00:28:35.582 START TEST nvmf_aer 00:28:35.582 ************************************ 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:28:35.582 * Looking for test storage... 00:28:35.582 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:35.582 21:15:26 
nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:35.582 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:35.583 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.583 21:15:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.583 21:15:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.841 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:35.841 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:35.841 21:15:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:35.841 21:15:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:42.412 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:42.412 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:42.413 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:42.413 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
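The scan above classifies NICs by PCI vendor:device pairs (the two ports found here are 0x15b3:0x1015, ConnectX-4 Lx) and then maps each function to its kernel netdev through sysfs. Both lookups can be done by hand; a hedged standalone equivalent, assuming lspci is installed:

  # list Mellanox functions by vendor:device, as matched above
  lspci -nn -d 15b3:1015

  # resolve a PCI function to its netdev name the way common.sh does,
  # via the net/ directory under the device's sysfs node
  ls /sys/bus/pci/devices/0000:d9:00.0/net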
00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:42.413 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:42.413 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:42.413 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:42.413 altname enp217s0f0np0 00:28:42.413 altname ens818f0np0 00:28:42.413 inet 192.168.100.8/24 scope global mlx_0_0 00:28:42.413 valid_lft forever preferred_lft forever 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:42.413 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:42.413 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:42.413 altname enp217s0f1np1 00:28:42.413 altname ens818f1np1 00:28:42.413 inet 192.168.100.9/24 scope global mlx_0_1 00:28:42.413 valid_lft forever preferred_lft forever 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:42.413 21:15:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:42.413 192.168.100.9' 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:42.413 192.168.100.9' 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:42.413 192.168.100.9' 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:42.413 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- 
host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3665766 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3665766 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3665766 ']' 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:42.414 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.414 [2024-07-13 21:15:33.110088] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:42.414 [2024-07-13 21:15:33.110138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.414 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.414 [2024-07-13 21:15:33.182840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.414 [2024-07-13 21:15:33.222822] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.414 [2024-07-13 21:15:33.222864] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.414 [2024-07-13 21:15:33.222873] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.414 [2024-07-13 21:15:33.222882] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.414 [2024-07-13 21:15:33.222889] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
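The nvmfappstart call above boils down to launching nvmf_tgt with the flags shown and then blocking in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern from an SPDK build tree (the polling loop is a rough stand-in for the waitforlisten helper, not its actual implementation):

  # start the target on 4 cores with all tracepoint groups enabled, as above
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # poll the default RPC socket until the app is ready to serve RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done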
00:28:42.414 [2024-07-13 21:15:33.222943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.414 [2024-07-13 21:15:33.223043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.414 [2024-07-13 21:15:33.223062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.414 [2024-07-13 21:15:33.223065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.672 [2024-07-13 21:15:33.402372] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x214bc80/0x2150170) succeed. 00:28:42.672 [2024-07-13 21:15:33.412748] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x214d2c0/0x2191800) succeed. 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.672 Malloc0 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.672 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.930 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.931 [2024-07-13 21:15:33.578439] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- 
host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:42.931 [ 00:28:42.931 { 00:28:42.931 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:42.931 "subtype": "Discovery", 00:28:42.931 "listen_addresses": [], 00:28:42.931 "allow_any_host": true, 00:28:42.931 "hosts": [] 00:28:42.931 }, 00:28:42.931 { 00:28:42.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.931 "subtype": "NVMe", 00:28:42.931 "listen_addresses": [ 00:28:42.931 { 00:28:42.931 "trtype": "RDMA", 00:28:42.931 "adrfam": "IPv4", 00:28:42.931 "traddr": "192.168.100.8", 00:28:42.931 "trsvcid": "4420" 00:28:42.931 } 00:28:42.931 ], 00:28:42.931 "allow_any_host": true, 00:28:42.931 "hosts": [], 00:28:42.931 "serial_number": "SPDK00000000000001", 00:28:42.931 "model_number": "SPDK bdev Controller", 00:28:42.931 "max_namespaces": 2, 00:28:42.931 "min_cntlid": 1, 00:28:42.931 "max_cntlid": 65519, 00:28:42.931 "namespaces": [ 00:28:42.931 { 00:28:42.931 "nsid": 1, 00:28:42.931 "bdev_name": "Malloc0", 00:28:42.931 "name": "Malloc0", 00:28:42.931 "nguid": "D6EECCE1A45B45329EA6F325E746EA0D", 00:28:42.931 "uuid": "d6eecce1-a45b-4532-9ea6-f325e746ea0d" 00:28:42.931 } 00:28:42.931 ] 00:28:42.931 } 00:28:42.931 ] 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=3665800 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:42.931 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.931 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.190 Malloc1 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.190 [ 00:28:43.190 { 00:28:43.190 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:43.190 "subtype": "Discovery", 00:28:43.190 "listen_addresses": [], 00:28:43.190 "allow_any_host": true, 00:28:43.190 "hosts": [] 00:28:43.190 }, 00:28:43.190 { 00:28:43.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.190 "subtype": "NVMe", 00:28:43.190 "listen_addresses": [ 00:28:43.190 { 00:28:43.190 "trtype": "RDMA", 00:28:43.190 "adrfam": "IPv4", 00:28:43.190 "traddr": "192.168.100.8", 00:28:43.190 "trsvcid": "4420" 00:28:43.190 } 00:28:43.190 ], 00:28:43.190 "allow_any_host": true, 00:28:43.190 "hosts": [], 00:28:43.190 "serial_number": "SPDK00000000000001", 00:28:43.190 "model_number": "SPDK bdev Controller", 00:28:43.190 "max_namespaces": 2, 00:28:43.190 "min_cntlid": 1, 00:28:43.190 "max_cntlid": 65519, 00:28:43.190 "namespaces": [ 00:28:43.190 { 00:28:43.190 "nsid": 1, 00:28:43.190 "bdev_name": "Malloc0", 00:28:43.190 "name": "Malloc0", 00:28:43.190 "nguid": "D6EECCE1A45B45329EA6F325E746EA0D", 00:28:43.190 "uuid": "d6eecce1-a45b-4532-9ea6-f325e746ea0d" 00:28:43.190 }, 00:28:43.190 { 00:28:43.190 "nsid": 2, 00:28:43.190 "bdev_name": "Malloc1", 00:28:43.190 "name": "Malloc1", 00:28:43.190 "nguid": "9368596A65354FB2AE6B4A435EE7BDA8", 00:28:43.190 "uuid": "9368596a-6535-4fb2-ae6b-4a435ee7bda8" 00:28:43.190 } 00:28:43.190 ] 00:28:43.190 } 00:28:43.190 ] 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 3665800 00:28:43.190 Asynchronous Event Request test 00:28:43.190 Attaching to 192.168.100.8 00:28:43.190 Attached to 192.168.100.8 00:28:43.190 Registering asynchronous event callbacks... 00:28:43.190 Starting namespace attribute notice tests for all controllers... 00:28:43.190 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:43.190 aer_cb - Changed Namespace 00:28:43.190 Cleaning up... 
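For reference, the RPC sequence aer.sh drove above, collected in one place (rpc_cmd is the test framework's wrapper around scripts/rpc.py against the target's socket); the final pair is what fires the namespace-attribute-changed AEN the aer tool is waiting on:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # adding a second namespace while the aer tool is attached triggers
  # the 'Changed Namespace' event logged above
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2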
00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:43.190 21:15:33 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:43.190 rmmod nvme_rdma 00:28:43.190 rmmod nvme_fabrics 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3665766 ']' 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3665766 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3665766 ']' 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3665766 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3665766 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3665766' 00:28:43.190 killing process with pid 3665766 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3665766 00:28:43.190 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3665766 00:28:43.449 21:15:34 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:43.449 21:15:34 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:43.449 00:28:43.449 real 0m7.981s 00:28:43.449 user 0m6.061s 00:28:43.449 sys 0m5.517s 00:28:43.449 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:43.449 21:15:34 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:43.449 ************************************ 00:28:43.449 END TEST nvmf_aer 00:28:43.449 ************************************ 00:28:43.708 21:15:34 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:28:43.708 21:15:34 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:43.708 21:15:34 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:43.708 21:15:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:43.708 ************************************ 00:28:43.708 START TEST nvmf_async_init 00:28:43.708 ************************************ 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:28:43.708 * Looking for test storage... 00:28:43.708 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.708 21:15:34 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=075125acb48d4735a8f96bf9dc4e0f77 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:43.708 21:15:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:50.345 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.345 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:50.345 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:50.345 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:50.345 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
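Here async_init.sh prepares its namespace identity: uuidgen | tr -d - strips the dashes from a freshly generated UUID to form the 32-hex-digit NGUID, which is later passed to nvmf_subsystem_add_ns with -g and surfaces, re-hyphenated, as the attached bdev's uuid and alias. A sketch of that derivation:

    # Sketch of the NGUID derivation traced above: random UUID, dashes removed.
    nguid=$(uuidgen | tr -d -)    # e.g. 075125acb48d4735a8f96bf9dc4e0f77
    echo "nguid=$nguid"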
00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:50.346 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:50.346 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:50.346 21:15:40 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:50.346 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:50.346 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:50.346 
21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:50.346 21:15:40 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:50.346 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:50.346 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:50.346 altname enp217s0f0np0 00:28:50.346 altname ens818f0np0 00:28:50.346 inet 192.168.100.8/24 scope global mlx_0_0 00:28:50.346 valid_lft forever preferred_lft forever 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:50.346 21:15:41 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:50.346 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:50.346 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:50.346 altname enp217s0f1np1 00:28:50.346 altname ens818f1np1 00:28:50.346 inet 192.168.100.9/24 scope global mlx_0_1 00:28:50.346 valid_lft forever preferred_lft forever 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:50.346 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:50.347 192.168.100.9' 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:50.347 192.168.100.9' 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:50.347 192.168.100.9' 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3669211 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3669211 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3669211 ']' 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
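nvmfappstart has just launched the target (nvmf_tgt -i 0 -e 0xFFFF -m 0x1), saved its pid, and waitforlisten now blocks until the RPC socket answers. A hedged sketch of that sequence, using SPDK's stock rpc.py in place of the harness's rpc_cmd wrapper; the socket poll is an illustrative stand-in for the real waitforlisten helper:

    # Sketch: start the target, then wait for its default RPC socket.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    ./scripts/rpc.py rpc_get_methods > /dev/null   # RPC now answering
    echo "target up, pid $nvmfpid"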
00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:50.347 21:15:41 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:50.606 [2024-07-13 21:15:41.207743] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:50.606 [2024-07-13 21:15:41.207800] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.606 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.606 [2024-07-13 21:15:41.280158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.606 [2024-07-13 21:15:41.318712] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.606 [2024-07-13 21:15:41.318757] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.606 [2024-07-13 21:15:41.318766] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.606 [2024-07-13 21:15:41.318775] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.606 [2024-07-13 21:15:41.318782] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.606 [2024-07-13 21:15:41.318805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.173 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:51.173 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:51.173 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:51.173 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.173 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.173 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.173 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:51.173 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.173 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.431 [2024-07-13 21:15:42.081159] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f57b50/0x1f5c040) succeed. 00:28:51.431 [2024-07-13 21:15:42.090254] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f59050/0x1f9d6d0) succeed. 
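With the target up, the RDMA transport created, and both mlx5 ports registered as IB devices, the next lines provision the test subsystem over RPC. The sketch below condenses that sequence into direct rpc.py calls, assumed equivalent to the rpc_cmd invocations in the trace; all flags are taken verbatim from it:

    # Sketch of the provisioning sequence traced here, via stock rpc.py.
    rpc=./scripts/rpc.py
    $rpc bdev_null_create null0 1024 512                      # 1024 MB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a  # allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 \
         -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0        # exposes nvme0n1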
00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.431 null0 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 075125acb48d4735a8f96bf9dc4e0f77 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.431 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.432 [2024-07-13 21:15:42.174548] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.432 nvme0n1 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.432 [ 00:28:51.432 { 00:28:51.432 "name": "nvme0n1", 00:28:51.432 "aliases": [ 00:28:51.432 "075125ac-b48d-4735-a8f9-6bf9dc4e0f77" 00:28:51.432 ], 00:28:51.432 "product_name": "NVMe disk", 00:28:51.432 "block_size": 512, 00:28:51.432 "num_blocks": 2097152, 00:28:51.432 "uuid": 
"075125ac-b48d-4735-a8f9-6bf9dc4e0f77", 00:28:51.432 "assigned_rate_limits": { 00:28:51.432 "rw_ios_per_sec": 0, 00:28:51.432 "rw_mbytes_per_sec": 0, 00:28:51.432 "r_mbytes_per_sec": 0, 00:28:51.432 "w_mbytes_per_sec": 0 00:28:51.432 }, 00:28:51.432 "claimed": false, 00:28:51.432 "zoned": false, 00:28:51.432 "supported_io_types": { 00:28:51.432 "read": true, 00:28:51.432 "write": true, 00:28:51.432 "unmap": false, 00:28:51.432 "write_zeroes": true, 00:28:51.432 "flush": true, 00:28:51.432 "reset": true, 00:28:51.432 "compare": true, 00:28:51.432 "compare_and_write": true, 00:28:51.432 "abort": true, 00:28:51.432 "nvme_admin": true, 00:28:51.432 "nvme_io": true 00:28:51.432 }, 00:28:51.432 "memory_domains": [ 00:28:51.432 { 00:28:51.432 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:28:51.432 "dma_device_type": 0 00:28:51.432 } 00:28:51.432 ], 00:28:51.432 "driver_specific": { 00:28:51.432 "nvme": [ 00:28:51.432 { 00:28:51.432 "trid": { 00:28:51.432 "trtype": "RDMA", 00:28:51.432 "adrfam": "IPv4", 00:28:51.432 "traddr": "192.168.100.8", 00:28:51.432 "trsvcid": "4420", 00:28:51.432 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:51.432 }, 00:28:51.432 "ctrlr_data": { 00:28:51.432 "cntlid": 1, 00:28:51.432 "vendor_id": "0x8086", 00:28:51.432 "model_number": "SPDK bdev Controller", 00:28:51.432 "serial_number": "00000000000000000000", 00:28:51.432 "firmware_revision": "24.05.1", 00:28:51.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.432 "oacs": { 00:28:51.432 "security": 0, 00:28:51.432 "format": 0, 00:28:51.432 "firmware": 0, 00:28:51.432 "ns_manage": 0 00:28:51.432 }, 00:28:51.432 "multi_ctrlr": true, 00:28:51.432 "ana_reporting": false 00:28:51.432 }, 00:28:51.432 "vs": { 00:28:51.432 "nvme_version": "1.3" 00:28:51.432 }, 00:28:51.432 "ns_data": { 00:28:51.432 "id": 1, 00:28:51.432 "can_share": true 00:28:51.432 } 00:28:51.432 } 00:28:51.432 ], 00:28:51.432 "mp_policy": "active_passive" 00:28:51.432 } 00:28:51.432 } 00:28:51.432 ] 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.432 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.432 [2024-07-13 21:15:42.288664] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:51.432 [2024-07-13 21:15:42.312295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.691 [2024-07-13 21:15:42.333716] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.691 [ 00:28:51.691 { 00:28:51.691 "name": "nvme0n1", 00:28:51.691 "aliases": [ 00:28:51.691 "075125ac-b48d-4735-a8f9-6bf9dc4e0f77" 00:28:51.691 ], 00:28:51.691 "product_name": "NVMe disk", 00:28:51.691 "block_size": 512, 00:28:51.691 "num_blocks": 2097152, 00:28:51.691 "uuid": "075125ac-b48d-4735-a8f9-6bf9dc4e0f77", 00:28:51.691 "assigned_rate_limits": { 00:28:51.691 "rw_ios_per_sec": 0, 00:28:51.691 "rw_mbytes_per_sec": 0, 00:28:51.691 "r_mbytes_per_sec": 0, 00:28:51.691 "w_mbytes_per_sec": 0 00:28:51.691 }, 00:28:51.691 "claimed": false, 00:28:51.691 "zoned": false, 00:28:51.691 "supported_io_types": { 00:28:51.691 "read": true, 00:28:51.691 "write": true, 00:28:51.691 "unmap": false, 00:28:51.691 "write_zeroes": true, 00:28:51.691 "flush": true, 00:28:51.691 "reset": true, 00:28:51.691 "compare": true, 00:28:51.691 "compare_and_write": true, 00:28:51.691 "abort": true, 00:28:51.691 "nvme_admin": true, 00:28:51.691 "nvme_io": true 00:28:51.691 }, 00:28:51.691 "memory_domains": [ 00:28:51.691 { 00:28:51.691 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:28:51.691 "dma_device_type": 0 00:28:51.691 } 00:28:51.691 ], 00:28:51.691 "driver_specific": { 00:28:51.691 "nvme": [ 00:28:51.691 { 00:28:51.691 "trid": { 00:28:51.691 "trtype": "RDMA", 00:28:51.691 "adrfam": "IPv4", 00:28:51.691 "traddr": "192.168.100.8", 00:28:51.691 "trsvcid": "4420", 00:28:51.691 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:51.691 }, 00:28:51.691 "ctrlr_data": { 00:28:51.691 "cntlid": 2, 00:28:51.691 "vendor_id": "0x8086", 00:28:51.691 "model_number": "SPDK bdev Controller", 00:28:51.691 "serial_number": "00000000000000000000", 00:28:51.691 "firmware_revision": "24.05.1", 00:28:51.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.691 "oacs": { 00:28:51.691 "security": 0, 00:28:51.691 "format": 0, 00:28:51.691 "firmware": 0, 00:28:51.691 "ns_manage": 0 00:28:51.691 }, 00:28:51.691 "multi_ctrlr": true, 00:28:51.691 "ana_reporting": false 00:28:51.691 }, 00:28:51.691 "vs": { 00:28:51.691 "nvme_version": "1.3" 00:28:51.691 }, 00:28:51.691 "ns_data": { 00:28:51.691 "id": 1, 00:28:51.691 "can_share": true 00:28:51.691 } 00:28:51.691 } 00:28:51.691 ], 00:28:51.691 "mp_policy": "active_passive" 00:28:51.691 } 00:28:51.691 } 00:28:51.691 ] 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.s0I0BHctvN 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:51.691 21:15:42 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.s0I0BHctvN 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.691 [2024-07-13 21:15:42.417162] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s0I0BHctvN 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s0I0BHctvN 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.691 [2024-07-13 21:15:42.437199] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:51.691 nvme0n1 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.691 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.691 [ 00:28:51.691 { 00:28:51.691 "name": "nvme0n1", 00:28:51.691 "aliases": [ 00:28:51.691 "075125ac-b48d-4735-a8f9-6bf9dc4e0f77" 00:28:51.692 ], 00:28:51.692 "product_name": "NVMe disk", 00:28:51.692 "block_size": 512, 00:28:51.692 "num_blocks": 2097152, 00:28:51.692 "uuid": "075125ac-b48d-4735-a8f9-6bf9dc4e0f77", 00:28:51.692 "assigned_rate_limits": { 00:28:51.692 "rw_ios_per_sec": 0, 00:28:51.692 "rw_mbytes_per_sec": 0, 00:28:51.692 "r_mbytes_per_sec": 0, 00:28:51.692 "w_mbytes_per_sec": 0 00:28:51.692 }, 00:28:51.692 "claimed": false, 00:28:51.692 "zoned": false, 00:28:51.692 "supported_io_types": { 00:28:51.692 "read": true, 00:28:51.692 "write": true, 00:28:51.692 "unmap": false, 00:28:51.692 "write_zeroes": true, 00:28:51.692 "flush": true, 00:28:51.692 "reset": true, 00:28:51.692 "compare": true, 00:28:51.692 "compare_and_write": true, 00:28:51.692 "abort": true, 
00:28:51.692 "nvme_admin": true, 00:28:51.692 "nvme_io": true 00:28:51.692 }, 00:28:51.692 "memory_domains": [ 00:28:51.692 { 00:28:51.692 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:28:51.692 "dma_device_type": 0 00:28:51.692 } 00:28:51.692 ], 00:28:51.692 "driver_specific": { 00:28:51.692 "nvme": [ 00:28:51.692 { 00:28:51.692 "trid": { 00:28:51.692 "trtype": "RDMA", 00:28:51.692 "adrfam": "IPv4", 00:28:51.692 "traddr": "192.168.100.8", 00:28:51.692 "trsvcid": "4421", 00:28:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:51.692 }, 00:28:51.692 "ctrlr_data": { 00:28:51.692 "cntlid": 3, 00:28:51.692 "vendor_id": "0x8086", 00:28:51.692 "model_number": "SPDK bdev Controller", 00:28:51.692 "serial_number": "00000000000000000000", 00:28:51.692 "firmware_revision": "24.05.1", 00:28:51.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.692 "oacs": { 00:28:51.692 "security": 0, 00:28:51.692 "format": 0, 00:28:51.692 "firmware": 0, 00:28:51.692 "ns_manage": 0 00:28:51.692 }, 00:28:51.692 "multi_ctrlr": true, 00:28:51.692 "ana_reporting": false 00:28:51.692 }, 00:28:51.692 "vs": { 00:28:51.692 "nvme_version": "1.3" 00:28:51.692 }, 00:28:51.692 "ns_data": { 00:28:51.692 "id": 1, 00:28:51.692 "can_share": true 00:28:51.692 } 00:28:51.692 } 00:28:51.692 ], 00:28:51.692 "mp_policy": "active_passive" 00:28:51.692 } 00:28:51.692 } 00:28:51.692 ] 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.s0I0BHctvN 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:51.692 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:51.692 rmmod nvme_rdma 00:28:51.951 rmmod nvme_fabrics 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3669211 ']' 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3669211 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3669211 ']' 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3669211 00:28:51.951 21:15:42 
nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3669211 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3669211' 00:28:51.951 killing process with pid 3669211 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3669211 00:28:51.951 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3669211 00:28:52.210 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:52.210 21:15:42 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:52.210 00:28:52.210 real 0m8.447s 00:28:52.210 user 0m3.735s 00:28:52.210 sys 0m5.447s 00:28:52.210 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:52.210 21:15:42 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:52.210 ************************************ 00:28:52.210 END TEST nvmf_async_init 00:28:52.210 ************************************ 00:28:52.210 21:15:42 nvmf_rdma -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:28:52.210 21:15:42 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:52.210 21:15:42 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:52.210 21:15:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:52.210 ************************************ 00:28:52.210 START TEST dma 00:28:52.210 ************************************ 00:28:52.210 21:15:42 nvmf_rdma.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:28:52.210 * Looking for test storage... 
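The suite has now moved on to the dma test through the same run_test harness that framed nvmf_aer and nvmf_async_init above; run_test is what prints the START TEST / END TEST banners and the real/user/sys timing lines. A stripped-down sketch of such a wrapper, assuming the actual helper in autotest_common.sh additionally manages xtrace and exit-status bookkeeping:

    # Stripped-down sketch of the run_test wrapper framing each suite above.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                        # prints the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g.: run_test dma test/nvmf/host/dma.sh --transport=rdma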
00:28:52.210 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:52.210 21:15:43 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:52.210 21:15:43 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.210 21:15:43 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.210 21:15:43 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.210 21:15:43 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.210 21:15:43 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.210 21:15:43 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.210 21:15:43 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:28:52.210 21:15:43 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:52.210 21:15:43 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:28:52.210 21:15:43 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:28:52.210 21:15:43 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:28:52.210 21:15:43 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:28:52.210 21:15:43 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.210 21:15:43 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:52.210 21:15:43 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:52.210 21:15:43 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:28:52.210 21:15:43 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.783 21:15:49 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.783 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:58.784 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:58.784 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:58.784 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:58.784 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:58.784 21:15:49 nvmf_rdma.dma -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:58.784 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:58.784 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:58.784 altname enp217s0f0np0 00:28:58.784 altname ens818f0np0 00:28:58.784 inet 192.168.100.8/24 scope global mlx_0_0 00:28:58.784 valid_lft forever preferred_lft forever 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:58.784 21:15:49 
nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:58.784 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:58.784 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:58.784 altname enp217s0f1np1 00:28:58.784 altname ens818f1np1 00:28:58.784 inet 192.168.100.9/24 scope global mlx_0_1 00:28:58.784 valid_lft forever preferred_lft forever 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:58.784 192.168.100.9' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:58.784 192.168.100.9' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:58.784 192.168.100.9' 00:28:58.784 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:58.785 21:15:49 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:58.785 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:58.785 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=3672659 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:58.785 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 3672659 00:28:58.785 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@827 -- # '[' -z 3672659 ']' 00:28:58.785 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.785 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:58.785 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.785 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:58.785 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:28:58.785 [2024-07-13 21:15:49.618617] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:58.785 [2024-07-13 21:15:49.618667] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.785 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.044 [2024-07-13 21:15:49.690161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:59.044 [2024-07-13 21:15:49.729754] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
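The target was just launched with nvmf_tgt -i 0 -e 0xFFFF -m 0x3 and waitforlisten blocks until its RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming the default socket /var/tmp/spdk.sock (the harness's waitforlisten is more thorough, but the idea is the same):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # shm id 0, all tracepoint groups, cores 0-1
  nvmfpid=$!
  # poll the RPC server; rpc.py exits non-zero until the socket is up
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done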
00:28:59.044 [2024-07-13 21:15:49.729792] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.044 [2024-07-13 21:15:49.729804] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.044 [2024-07-13 21:15:49.729813] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.044 [2024-07-13 21:15:49.729820] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.044 [2024-07-13 21:15:49.729863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.044 [2024-07-13 21:15:49.729866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.044 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:59.044 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@860 -- # return 0 00:28:59.044 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:59.044 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.044 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:28:59.044 21:15:49 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.044 21:15:49 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:59.044 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.044 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:28:59.044 [2024-07-13 21:15:49.888307] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x792630/0x796b20) succeed. 00:28:59.044 [2024-07-13 21:15:49.897450] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x793b30/0x7d81b0) succeed. 
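With the RDMA transport created and both mlx5 IB devices registered, host/dma.sh provisions the target through four more RPCs (the rpc_cmd lines below). Outside the harness the same setup could be issued directly with scripts/rpc.py, a sketch mirroring the arguments visible in this log and assuming the default RPC socket:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0          # 256 MiB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420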
00:28:59.304 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.304 21:15:49 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:28:59.304 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.304 21:15:49 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:28:59.304 Malloc0 00:28:59.304 21:15:50 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.304 21:15:50 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:28:59.304 21:15:50 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.304 21:15:50 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:28:59.304 21:15:50 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.304 21:15:50 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:28:59.304 21:15:50 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.304 21:15:50 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:28:59.304 21:15:50 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.304 21:15:50 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:28:59.304 21:15:50 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.305 21:15:50 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:28:59.305 [2024-07-13 21:15:50.051457] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:59.305 21:15:50 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.305 21:15:50 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:28:59.305 21:15:50 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:28:59.305 21:15:50 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:28:59.305 21:15:50 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:28:59.305 21:15:50 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:59.305 21:15:50 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:59.305 { 00:28:59.305 "params": { 00:28:59.305 "name": "Nvme$subsystem", 00:28:59.305 "trtype": "$TEST_TRANSPORT", 00:28:59.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.305 "adrfam": "ipv4", 00:28:59.305 "trsvcid": "$NVMF_PORT", 00:28:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.305 "hdgst": ${hdgst:-false}, 00:28:59.305 "ddgst": ${ddgst:-false} 00:28:59.305 }, 00:28:59.305 "method": "bdev_nvme_attach_controller" 00:28:59.305 } 00:28:59.305 EOF 00:28:59.305 )") 00:28:59.305 21:15:50 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:28:59.305 21:15:50 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
00:28:59.305 21:15:50 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=,
00:28:59.305 21:15:50 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:28:59.305 "params": {
00:28:59.305 "name": "Nvme0",
00:28:59.305 "trtype": "rdma",
00:28:59.305 "traddr": "192.168.100.8",
00:28:59.305 "adrfam": "ipv4",
00:28:59.305 "trsvcid": "4420",
00:28:59.305 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:59.305 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:59.305 "hdgst": false,
00:28:59.305 "ddgst": false
00:28:59.305 },
00:28:59.305 "method": "bdev_nvme_attach_controller"
00:28:59.305 }'
00:28:59.305 [2024-07-13 21:15:50.101357] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:28:59.305 [2024-07-13 21:15:50.101408] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672820 ]
00:28:59.305 EAL: No free 2048 kB hugepages reported on node 1
00:28:59.305 [2024-07-13 21:15:50.171909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:59.564 [2024-07-13 21:15:50.211526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:28:59.564 [2024-07-13 21:15:50.211528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:04.836 bdev Nvme0n1 reports 1 memory domains
00:29:04.836 bdev Nvme0n1 supports RDMA memory domain
00:29:04.836 Initialization complete, running randrw IO for 5 sec on 2 cores
00:29:04.836 ==========================================================================
00:29:04.836                                       Latency [us]
00:29:04.836       IOPS     MiB/s    Average        min        max
00:29:04.836   Core 2:  21754.79     84.98     734.81     254.86    8446.00
00:29:04.836   Core 3:  21870.98     85.43     730.85     268.53    8541.91
00:29:04.836 ==========================================================================
00:29:04.836   Total :  43625.77    170.41     732.82     254.86    8541.91
00:29:04.836
00:29:04.836 Total operations: 218153, translate 218153 pull_push 0 memzero 0
00:29:04.836 21:15:55 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
00:29:04.836 21:15:55 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json
00:29:04.836 21:15:55 nvmf_rdma.dma -- host/dma.sh@21 -- # jq .
00:29:04.836 [2024-07-13 21:15:55.638621] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
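Note the core split visible in the reactor notices: the target runs with -m 0x3 (cores 0-1) while every test_dma pass uses -m 0xc (cores 2-3), so target and initiator reactors never share a core:

  # core masks are plain bitmaps of logical cores
  nvmf_tgt  -m 0x3 ...   # 0b0011 -> cores 0,1
  test_dma  -m 0xc ...   # 0b1100 -> cores 2,3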
00:29:04.836 [2024-07-13 21:15:55.638679] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673738 ]
00:29:04.836 EAL: No free 2048 kB hugepages reported on node 1
00:29:04.836 [2024-07-13 21:15:55.705830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:05.095 [2024-07-13 21:15:55.744606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:05.095 [2024-07-13 21:15:55.744608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:10.366 bdev Malloc0 reports 2 memory domains
00:29:10.366 bdev Malloc0 doesn't support RDMA memory domain
00:29:10.366 Initialization complete, running randrw IO for 5 sec on 2 cores
00:29:10.366 ==========================================================================
00:29:10.366                                       Latency [us]
00:29:10.366       IOPS     MiB/s    Average        min        max
00:29:10.366   Core 2:  14794.42     57.79    1080.76     485.50    1868.27
00:29:10.366   Core 3:  14915.18     58.26    1072.00     437.10    1828.93
00:29:10.366 ==========================================================================
00:29:10.366   Total :  29709.61    116.05    1076.36     437.10    1868.27
00:29:10.366
00:29:10.366 Total operations: 148600, translate 0 pull_push 594400 memzero 0
00:29:10.366 21:16:01 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero
00:29:10.366 21:16:01 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0
00:29:10.366 21:16:01 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0
00:29:10.366 21:16:01 nvmf_rdma.dma -- host/dma.sh@50 -- # jq .
00:29:10.366 Ignoring -M option
00:29:10.366 [2024-07-13 21:16:01.071516] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:29:10.366 [2024-07-13 21:16:01.071571] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674548 ]
00:29:10.366 EAL: No free 2048 kB hugepages reported on node 1
00:29:10.367 [2024-07-13 21:16:01.142020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:10.367 [2024-07-13 21:16:01.178445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:10.367 [2024-07-13 21:16:01.178448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:16.928 bdev a6f2fbeb-7cd7-41ff-a4d9-9e5f53a512a7 reports 1 memory domains
00:29:16.928 bdev a6f2fbeb-7cd7-41ff-a4d9-9e5f53a512a7 supports RDMA memory domain
00:29:16.928 Initialization complete, running randread IO for 5 sec on 2 cores
00:29:16.928 ==========================================================================
00:29:16.928                                       Latency [us]
00:29:16.928       IOPS     MiB/s    Average        min        max
00:29:16.928   Core 2:  80175.83    313.19     198.79      81.44    1737.12
00:29:16.928   Core 3:  83067.88    324.48     191.85      80.69    1648.74
00:29:16.928 ==========================================================================
00:29:16.928   Total : 163243.72    637.67     195.26      80.69    1737.12
00:29:16.928
00:29:16.928 Total operations: 816318, translate 0 pull_push 0 memzero 816318
00:29:16.928 21:16:06 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
00:29:16.928 EAL: No free 2048 kB hugepages reported on node 1
00:29:16.928 [2024-07-13 21:16:06.724062] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:29:18.303 Initializing NVMe Controllers
00:29:18.303 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:29:18.303 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:29:18.303 Initialization complete. Launching workers.
00:29:18.303 ========================================================
00:29:18.303                                                                              Latency(us)
00:29:18.303 Device Information                                                   :    IOPS      MiB/s    Average        min        max
00:29:18.303 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.08 6982.41 7995.58
00:29:18.303 ========================================================
00:29:18.303 Total                                                                : 2016.00 7.88 7972.08 6982.41 7995.58
00:29:18.303
00:29:18.303 21:16:09 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate
00:29:18.303 21:16:09 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0
00:29:18.303 21:16:09 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0
00:29:18.303 21:16:09 nvmf_rdma.dma -- host/dma.sh@50 -- # jq .
00:29:18.303 [2024-07-13 21:16:09.062913] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
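The spdk_nvme_perf pass above attaches as an SPDK userspace host. Since the harness also loaded the kernel initiator earlier (modprobe nvme-rdma) and set NVME_CONNECT='nvme connect -i 15', the same listener could be reached from the kernel side as well; a sketch, not part of this run:

  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  nvme list                                      # the namespace shows up as a block device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode0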
00:29:18.303 [2024-07-13 21:16:09.062968] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3675870 ]
00:29:18.303 EAL: No free 2048 kB hugepages reported on node 1
00:29:18.303 [2024-07-13 21:16:09.129734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:18.304 [2024-07-13 21:16:09.168892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:18.304 [2024-07-13 21:16:09.168895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:24.881 bdev 80beb719-7707-4351-b0d1-0e0283b6d001 reports 1 memory domains
00:29:24.881 bdev 80beb719-7707-4351-b0d1-0e0283b6d001 supports RDMA memory domain
00:29:24.881 Initialization complete, running randrw IO for 5 sec on 2 cores
00:29:24.881 ==========================================================================
00:29:24.881                                       Latency [us]
00:29:24.881       IOPS     MiB/s    Average        min        max
00:29:24.881   Core 2:  19175.67     74.90     833.72      17.00   10912.16
00:29:24.881   Core 3:  19466.99     76.04     821.21      10.55   10399.16
00:29:24.881 ==========================================================================
00:29:24.881   Total :  38642.66    150.95     827.42      10.55   10912.16
00:29:24.881
00:29:24.881 Total operations: 193265, translate 193136 pull_push 0 memzero 129
00:29:24.881 21:16:14 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:29:24.881 21:16:14 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:29:24.881 rmmod nvme_rdma
00:29:24.881 rmmod nvme_fabrics
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 3672659 ']'
00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 3672659
00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@946 -- # '[' -z 3672659 ']'
00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@950 -- # kill -0 3672659
00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@951 -- # uname
00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3672659
00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3672659'
00:29:24.881 killing process with pid 3672659
00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@965 -- # kill 3672659
00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@970 -- #
wait 3672659 00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:24.881 21:16:14 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:24.881 00:29:24.881 real 0m32.062s 00:29:24.881 user 1m34.717s 00:29:24.881 sys 0m6.178s 00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:24.881 21:16:14 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:24.881 ************************************ 00:29:24.881 END TEST dma 00:29:24.881 ************************************ 00:29:24.881 21:16:15 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:29:24.881 21:16:15 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:24.881 21:16:15 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:24.881 21:16:15 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:24.881 ************************************ 00:29:24.881 START TEST nvmf_identify 00:29:24.882 ************************************ 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:29:24.882 * Looking for test storage... 00:29:24.882 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:24.882 21:16:15 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify 
-- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:31.490 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:31.490 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:31.491 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:31.491 21:16:21 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:31.491 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:31.491 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:31.491 21:16:21 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:29:31.491 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:31.491 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:31.491 altname enp217s0f0np0 00:29:31.491 altname ens818f0np0 00:29:31.491 inet 192.168.100.8/24 scope global mlx_0_0 00:29:31.491 valid_lft forever preferred_lft forever 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:29:31.491 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:31.491 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:31.491 altname enp217s0f1np1 00:29:31.491 altname ens818f1np1 00:29:31.491 inet 192.168.100.9/24 scope global mlx_0_1 00:29:31.491 valid_lft forever preferred_lft forever 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:31.491 192.168.100.9' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:31.491 192.168.100.9' 
00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:31.491 192.168.100.9' 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:29:31.491 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3680089 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3680089 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3680089 ']' 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:31.492 21:16:21 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:31.492 [2024-07-13 21:16:21.918704] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:31.492 [2024-07-13 21:16:21.918755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.492 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.492 [2024-07-13 21:16:21.988197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:31.492 [2024-07-13 21:16:22.028841] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.492 [2024-07-13 21:16:22.028885] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
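
[Editor's note] In the trace above, allocate_nic_ips walks the RDMA-capable netdevs (mlx_0_0, mlx_0_1) and reads their IPv4 addresses (192.168.100.8 and 192.168.100.9), and common.sh@456-458 then splits the combined RDMA_IP_LIST into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with head/tail. A minimal bash sketch of that logic, reconstructed only from the commands visible in the trace (the mlx_0_* names are specific to this job's Mellanox ports):

  get_ip_address() {
      # First IPv4 address bound to the given interface, e.g. 192.168.100.8
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  # One address per line, in interface order.
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
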
00:29:31.492 [2024-07-13 21:16:22.028895] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.492 [2024-07-13 21:16:22.028904] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.492 [2024-07-13 21:16:22.028911] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.492 [2024-07-13 21:16:22.028968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.492 [2024-07-13 21:16:22.029079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.492 [2024-07-13 21:16:22.029102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.492 [2024-07-13 21:16:22.029104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.086 [2024-07-13 21:16:22.759454] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21e7c80/0x21ec170) succeed. 00:29:32.086 [2024-07-13 21:16:22.769999] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21e92c0/0x222d800) succeed. 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.086 Malloc0 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.086 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.350 21:16:22 nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:32.350 
21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.350 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.350 [2024-07-13 21:16:22.980511] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:32.350 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.350 21:16:22 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:32.350 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.350 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.350 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.350 21:16:22 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:32.350 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.350 21:16:22 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.350 [ 00:29:32.350 { 00:29:32.350 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:32.350 "subtype": "Discovery", 00:29:32.350 "listen_addresses": [ 00:29:32.350 { 00:29:32.350 "trtype": "RDMA", 00:29:32.350 "adrfam": "IPv4", 00:29:32.350 "traddr": "192.168.100.8", 00:29:32.350 "trsvcid": "4420" 00:29:32.350 } 00:29:32.350 ], 00:29:32.350 "allow_any_host": true, 00:29:32.350 "hosts": [] 00:29:32.350 }, 00:29:32.350 { 00:29:32.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.350 "subtype": "NVMe", 00:29:32.350 "listen_addresses": [ 00:29:32.350 { 00:29:32.350 "trtype": "RDMA", 00:29:32.350 "adrfam": "IPv4", 00:29:32.350 "traddr": "192.168.100.8", 00:29:32.350 "trsvcid": "4420" 00:29:32.350 } 00:29:32.350 ], 00:29:32.350 "allow_any_host": true, 00:29:32.350 "hosts": [], 00:29:32.350 "serial_number": "SPDK00000000000001", 00:29:32.350 "model_number": "SPDK bdev Controller", 00:29:32.350 "max_namespaces": 32, 00:29:32.350 "min_cntlid": 1, 00:29:32.350 "max_cntlid": 65519, 00:29:32.350 "namespaces": [ 00:29:32.350 { 00:29:32.350 "nsid": 1, 00:29:32.350 "bdev_name": "Malloc0", 00:29:32.350 "name": "Malloc0", 00:29:32.350 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:32.350 "eui64": "ABCDEF0123456789", 00:29:32.350 "uuid": "ceb6f7a6-3cfd-4519-b3c2-0742c0cf4e72" 00:29:32.350 } 00:29:32.350 ] 00:29:32.350 } 00:29:32.350 ] 00:29:32.350 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.350 21:16:23 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:32.350 [2024-07-13 21:16:23.037294] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
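
[Editor's note] With the target up, the rpc_cmd calls traced above configure it end to end: an RDMA transport with 1024 shared buffers and 8192-byte I/O units, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1, and RDMA listeners on 192.168.100.8:4420 for both the subsystem and the discovery service. Assuming rpc_cmd is, as in SPDK's autotest_common.sh, a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, the same setup sketched stand-alone would be:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The nvmf_get_subsystems JSON that follows in the trace confirms the result: the discovery subsystem plus cnode1 with Malloc0 as nsid 1.
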
00:29:32.350 [2024-07-13 21:16:23.037333] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3680371 ] 00:29:32.350 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.350 [2024-07-13 21:16:23.086220] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:32.350 [2024-07-13 21:16:23.086299] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:29:32.350 [2024-07-13 21:16:23.086316] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:29:32.350 [2024-07-13 21:16:23.086321] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:29:32.350 [2024-07-13 21:16:23.086352] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:32.350 [2024-07-13 21:16:23.105557] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:29:32.350 [2024-07-13 21:16:23.115684] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:29:32.350 [2024-07-13 21:16:23.115695] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:29:32.350 [2024-07-13 21:16:23.115703] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115710] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115716] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115722] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115729] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115735] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115741] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115747] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115753] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115759] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115766] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115772] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115778] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115784] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115790] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115797] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115803] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115809] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115815] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115824] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115831] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115837] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115843] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115850] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115856] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115862] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115868] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115874] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115880] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115887] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115893] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115899] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:29:32.350 [2024-07-13 21:16:23.115904] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:29:32.350 [2024-07-13 21:16:23.115909] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:29:32.350 [2024-07-13 21:16:23.115925] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.350 [2024-07-13 21:16:23.115938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182100 00:29:32.350 [2024-07-13 21:16:23.121016] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.350 [2024-07-13 21:16:23.121027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121034] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121042] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:32.351 [2024-07-13 21:16:23.121049] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:32.351 [2024-07-13 21:16:23.121056] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:32.351 [2024-07-13 21:16:23.121071] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.351 [2024-07-13 21:16:23.121104] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.351 [2024-07-13 21:16:23.121110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121116] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:32.351 [2024-07-13 21:16:23.121122] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121129] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:32.351 [2024-07-13 21:16:23.121139] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.351 [2024-07-13 21:16:23.121167] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.351 [2024-07-13 21:16:23.121173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121179] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:32.351 [2024-07-13 21:16:23.121185] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121193] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:32.351 [2024-07-13 21:16:23.121201] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.351 [2024-07-13 21:16:23.121231] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.351 [2024-07-13 21:16:23.121237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121244] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:32.351 [2024-07-13 21:16:23.121250] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121258] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.351 [2024-07-13 21:16:23.121282] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.351 [2024-07-13 21:16:23.121287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121294] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:32.351 [2024-07-13 21:16:23.121300] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:32.351 [2024-07-13 21:16:23.121306] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121312] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:32.351 [2024-07-13 21:16:23.121419] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:32.351 [2024-07-13 21:16:23.121425] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:32.351 [2024-07-13 21:16:23.121434] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.351 [2024-07-13 21:16:23.121459] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.351 [2024-07-13 21:16:23.121464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121471] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:32.351 [2024-07-13 21:16:23.121479] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121487] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.351 [2024-07-13 21:16:23.121511] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.351 [2024-07-13 21:16:23.121516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121522] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:29:32.351 [2024-07-13 21:16:23.121528] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:32.351 [2024-07-13 21:16:23.121534] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121541] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:32.351 [2024-07-13 21:16:23.121549] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:32.351 [2024-07-13 21:16:23.121558] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182100 00:29:32.351 [2024-07-13 21:16:23.121609] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.351 [2024-07-13 21:16:23.121615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121623] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:32.351 [2024-07-13 21:16:23.121629] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:32.351 [2024-07-13 21:16:23.121635] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:32.351 [2024-07-13 21:16:23.121642] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:32.351 [2024-07-13 21:16:23.121648] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:32.351 [2024-07-13 21:16:23.121654] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:32.351 [2024-07-13 21:16:23.121659] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121669] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:32.351 [2024-07-13 21:16:23.121677] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.351 [2024-07-13 21:16:23.121709] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.351 [2024-07-13 21:16:23.121715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121723] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121732] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.351 [2024-07-13 21:16:23.121739] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.351 [2024-07-13 21:16:23.121753] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.351 [2024-07-13 21:16:23.121767] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.351 [2024-07-13 21:16:23.121780] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:32.351 [2024-07-13 21:16:23.121785] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121794] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:32.351 [2024-07-13 21:16:23.121801] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.351 [2024-07-13 21:16:23.121830] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.351 [2024-07-13 21:16:23.121836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121843] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:32.351 [2024-07-13 21:16:23.121849] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:32.351 [2024-07-13 21:16:23.121855] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121863] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182100 00:29:32.351 [2024-07-13 21:16:23.121898] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.351 [2024-07-13 21:16:23.121904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:32.351 [2024-07-13 21:16:23.121911] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x182100 00:29:32.351 [2024-07-13 21:16:23.121921] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:32.352 [2024-07-13 21:16:23.121941] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100 00:29:32.352 [2024-07-13 21:16:23.121949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x182100 00:29:32.352 [2024-07-13 21:16:23.121957] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182100 00:29:32.352 [2024-07-13 21:16:23.121967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.352 [2024-07-13 21:16:23.121985] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.352 [2024-07-13 21:16:23.121991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:32.352 [2024-07-13 21:16:23.122001] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182100 00:29:32.352 [2024-07-13 21:16:23.122009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x182100 00:29:32.352 [2024-07-13 21:16:23.122020] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182100 00:29:32.352 [2024-07-13 21:16:23.122026] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.352 [2024-07-13 21:16:23.122031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:32.352 [2024-07-13 21:16:23.122038] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182100 00:29:32.352 [2024-07-13 21:16:23.122044] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.352 [2024-07-13 21:16:23.122049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:32.352 [2024-07-13 21:16:23.122060] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182100 00:29:32.352 [2024-07-13 21:16:23.122067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x182100 00:29:32.352 [2024-07-13 21:16:23.122073] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182100 00:29:32.352 [2024-07-13 21:16:23.122092] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.352 [2024-07-13 21:16:23.122098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:32.352 [2024-07-13 21:16:23.122108] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182100 00:29:32.352 ===================================================== 00:29:32.352 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:32.352 
=====================================================
00:29:32.352 Controller Capabilities/Features
00:29:32.352 ================================
00:29:32.352 Vendor ID: 0000
00:29:32.352 Subsystem Vendor ID: 0000
00:29:32.352 Serial Number: ....................
00:29:32.352 Model Number: ........................................
00:29:32.352 Firmware Version: 24.05.1
00:29:32.352 Recommended Arb Burst: 0
00:29:32.352 IEEE OUI Identifier: 00 00 00
00:29:32.352 Multi-path I/O
00:29:32.352 May have multiple subsystem ports: No
00:29:32.352 May have multiple controllers: No
00:29:32.352 Associated with SR-IOV VF: No
00:29:32.352 Max Data Transfer Size: 131072
00:29:32.352 Max Number of Namespaces: 0
00:29:32.352 Max Number of I/O Queues: 1024
00:29:32.352 NVMe Specification Version (VS): 1.3
00:29:32.352 NVMe Specification Version (Identify): 1.3
00:29:32.352 Maximum Queue Entries: 128
00:29:32.352 Contiguous Queues Required: Yes
00:29:32.352 Arbitration Mechanisms Supported
00:29:32.352 Weighted Round Robin: Not Supported
00:29:32.352 Vendor Specific: Not Supported
00:29:32.352 Reset Timeout: 15000 ms
00:29:32.352 Doorbell Stride: 4 bytes
00:29:32.352 NVM Subsystem Reset: Not Supported
00:29:32.352 Command Sets Supported
00:29:32.352 NVM Command Set: Supported
00:29:32.352 Boot Partition: Not Supported
00:29:32.352 Memory Page Size Minimum: 4096 bytes
00:29:32.352 Memory Page Size Maximum: 4096 bytes
00:29:32.352 Persistent Memory Region: Not Supported
00:29:32.352 Optional Asynchronous Events Supported
00:29:32.352 Namespace Attribute Notices: Not Supported
00:29:32.352 Firmware Activation Notices: Not Supported
00:29:32.352 ANA Change Notices: Not Supported
00:29:32.352 PLE Aggregate Log Change Notices: Not Supported
00:29:32.352 LBA Status Info Alert Notices: Not Supported
00:29:32.352 EGE Aggregate Log Change Notices: Not Supported
00:29:32.352 Normal NVM Subsystem Shutdown event: Not Supported
00:29:32.352 Zone Descriptor Change Notices: Not Supported
00:29:32.352 Discovery Log Change Notices: Supported
00:29:32.352 Controller Attributes
00:29:32.352 128-bit Host Identifier: Not Supported
00:29:32.352 Non-Operational Permissive Mode: Not Supported
00:29:32.352 NVM Sets: Not Supported
00:29:32.352 Read Recovery Levels: Not Supported
00:29:32.352 Endurance Groups: Not Supported
00:29:32.352 Predictable Latency Mode: Not Supported
00:29:32.352 Traffic Based Keep ALive: Not Supported
00:29:32.352 Namespace Granularity: Not Supported
00:29:32.352 SQ Associations: Not Supported
00:29:32.352 UUID List: Not Supported
00:29:32.352 Multi-Domain Subsystem: Not Supported
00:29:32.352 Fixed Capacity Management: Not Supported
00:29:32.352 Variable Capacity Management: Not Supported
00:29:32.352 Delete Endurance Group: Not Supported
00:29:32.352 Delete NVM Set: Not Supported
00:29:32.352 Extended LBA Formats Supported: Not Supported
00:29:32.352 Flexible Data Placement Supported: Not Supported
00:29:32.352
00:29:32.352 Controller Memory Buffer Support
00:29:32.352 ================================
00:29:32.352 Supported: No
00:29:32.352
00:29:32.352 Persistent Memory Region Support
00:29:32.352 ================================
00:29:32.352 Supported: No
00:29:32.352
00:29:32.352 Admin Command Set Attributes
00:29:32.352 ============================
00:29:32.352 Security Send/Receive: Not Supported
00:29:32.352 Format NVM: Not Supported
00:29:32.352 Firmware Activate/Download: Not Supported
00:29:32.352 Namespace Management: Not Supported
00:29:32.352 Device Self-Test: Not Supported
00:29:32.352 Directives: Not Supported
00:29:32.352 NVMe-MI: Not Supported
00:29:32.352 Virtualization Management: Not Supported
00:29:32.352 Doorbell Buffer Config: Not Supported
00:29:32.352 Get LBA Status Capability: Not Supported
00:29:32.352 Command & Feature Lockdown Capability: Not Supported
00:29:32.352 Abort Command Limit: 1
00:29:32.352 Async Event Request Limit: 4
00:29:32.352 Number of Firmware Slots: N/A
00:29:32.352 Firmware Slot 1 Read-Only: N/A
00:29:32.352 Firmware Activation Without Reset: N/A
00:29:32.352 Multiple Update Detection Support: N/A
00:29:32.352 Firmware Update Granularity: No Information Provided
00:29:32.352 Per-Namespace SMART Log: No
00:29:32.352 Asymmetric Namespace Access Log Page: Not Supported
00:29:32.352 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:29:32.352 Command Effects Log Page: Not Supported
00:29:32.352 Get Log Page Extended Data: Supported
00:29:32.352 Telemetry Log Pages: Not Supported
00:29:32.352 Persistent Event Log Pages: Not Supported
00:29:32.352 Supported Log Pages Log Page: May Support
00:29:32.352 Commands Supported & Effects Log Page: Not Supported
00:29:32.352 Feature Identifiers & Effects Log Page: May Support
00:29:32.352 NVMe-MI Commands & Effects Log Page: May Support
00:29:32.352 Data Area 4 for Telemetry Log: Not Supported
00:29:32.352 Error Log Page Entries Supported: 128
00:29:32.352 Keep Alive: Not Supported
00:29:32.352
00:29:32.352 NVM Command Set Attributes
00:29:32.352 ==========================
00:29:32.352 Submission Queue Entry Size
00:29:32.352 Max: 1
00:29:32.352 Min: 1
00:29:32.352 Completion Queue Entry Size
00:29:32.352 Max: 1
00:29:32.352 Min: 1
00:29:32.352 Number of Namespaces: 0
00:29:32.352 Compare Command: Not Supported
00:29:32.352 Write Uncorrectable Command: Not Supported
00:29:32.352 Dataset Management Command: Not Supported
00:29:32.352 Write Zeroes Command: Not Supported
00:29:32.352 Set Features Save Field: Not Supported
00:29:32.352 Reservations: Not Supported
00:29:32.352 Timestamp: Not Supported
00:29:32.352 Copy: Not Supported
00:29:32.352 Volatile Write Cache: Not Present
00:29:32.352 Atomic Write Unit (Normal): 1
00:29:32.352 Atomic Write Unit (PFail): 1
00:29:32.352 Atomic Compare & Write Unit: 1
00:29:32.352 Fused Compare & Write: Supported
00:29:32.352 Scatter-Gather List
00:29:32.352 SGL Command Set: Supported
00:29:32.352 SGL Keyed: Supported
00:29:32.352 SGL Bit Bucket Descriptor: Not Supported
00:29:32.352 SGL Metadata Pointer: Not Supported
00:29:32.352 Oversized SGL: Not Supported
00:29:32.352 SGL Metadata Address: Not Supported
00:29:32.352 SGL Offset: Supported
00:29:32.352 Transport SGL Data Block: Not Supported
00:29:32.352 Replay Protected Memory Block: Not Supported
00:29:32.352
00:29:32.352 Firmware Slot Information
00:29:32.352 =========================
00:29:32.352 Active slot: 0
00:29:32.352
00:29:32.352
00:29:32.352 Error Log
00:29:32.352 =========
00:29:32.352
00:29:32.352 Active Namespaces
00:29:32.352 =================
00:29:32.352 Discovery Log Page
00:29:32.352 ==================
00:29:32.352 Generation Counter: 2
00:29:32.352 Number of Records: 2
00:29:32.352 Record Format: 0
00:29:32.352
00:29:32.352 Discovery Log Entry 0
00:29:32.352 ----------------------
00:29:32.352 Transport Type: 1 (RDMA)
00:29:32.352 Address Family: 1 (IPv4)
00:29:32.352 Subsystem Type: 3 (Current Discovery Subsystem)
00:29:32.352 Entry Flags:
00:29:32.352 Duplicate Returned Information: 1
00:29:32.352 Explicit Persistent Connection Support for Discovery: 1
00:29:32.352 Transport Requirements:
00:29:32.352 Secure Channel: Not Required
00:29:32.353 Port ID: 0 (0x0000)
00:29:32.353 Controller ID: 65535 (0xffff)
00:29:32.353 Admin Max SQ Size: 128
00:29:32.353 Transport Service Identifier: 4420
00:29:32.353 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:29:32.353 Transport Address: 192.168.100.8
00:29:32.353 Transport Specific Address Subtype - RDMA
00:29:32.353 RDMA QP Service Type: 1 (Reliable Connected)
00:29:32.353 RDMA Provider Type: 1 (No provider specified)
00:29:32.353 RDMA CM Service: 1 (RDMA_CM)
00:29:32.353 Discovery Log Entry 1
00:29:32.353 ----------------------
00:29:32.353 Transport Type: 1 (RDMA)
00:29:32.353 Address Family: 1 (IPv4)
00:29:32.353 Subsystem Type: 2 (NVM Subsystem)
00:29:32.353 Entry Flags:
00:29:32.353 Duplicate Returned Information: 0
00:29:32.353 Explicit Persistent Connection Support for Discovery: 0
00:29:32.353 Transport Requirements:
00:29:32.353 Secure Channel: Not Required
00:29:32.353 Port ID: 0 (0x0000)
00:29:32.353 Controller ID: 65535 (0xffff)
00:29:32.353 Admin Max SQ Size: [2024-07-13 21:16:23.122180] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:29:32.353 [2024-07-13 21:16:23.122191] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47143 doesn't match qid
00:29:32.353 [2024-07-13 21:16:23.122205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32687 cdw0:5 sqhd:65f0 p:0 m:0 dnr:0
00:29:32.353 [2024-07-13 21:16:23.122212] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47143 doesn't match qid
00:29:32.353 [2024-07-13 21:16:23.122220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32687 cdw0:5 sqhd:65f0 p:0 m:0 dnr:0
00:29:32.353 [2024-07-13 21:16:23.122227] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47143 doesn't match qid
00:29:32.353 [2024-07-13 21:16:23.122236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32687 cdw0:5 sqhd:65f0 p:0 m:0 dnr:0
00:29:32.353 [2024-07-13 21:16:23.122243] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47143 doesn't match qid
00:29:32.353 [2024-07-13 21:16:23.122251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32687 cdw0:5 sqhd:65f0 p:0 m:0 dnr:0
00:29:32.353 [2024-07-13 21:16:23.122260] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182100
00:29:32.353 [2024-07-13 21:16:23.122268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.353 [2024-07-13 21:16:23.122289] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.353 [2024-07-13 21:16:23.122297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:29:32.353 [2024-07-13 21:16:23.122305] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100
00:29:32.353 [2024-07-13 21:16:23.122313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.353 [2024-07-13 21:16:23.122319] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182100
00:29:32.353 [2024-07-13
21:16:23.122335] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.353 [2024-07-13 21:16:23.122341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:32.353 [2024-07-13 21:16:23.122351] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:32.353 [2024-07-13 21:16:23.122358] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:32.353 [2024-07-13 21:16:23.122364] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182100 00:29:32.353 [2024-07-13 21:16:23.122374] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.353 [2024-07-13 21:16:23.122383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.353 [2024-07-13 21:16:23.122408] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.353 [2024-07-13 21:16:23.122417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:29:32.353 [2024-07-13 21:16:23.122424] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182100 00:29:32.353 [2024-07-13 21:16:23.122434] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.353 [2024-07-13 21:16:23.122442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.353 [2024-07-13 21:16:23.122464] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.353 [2024-07-13 21:16:23.122470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:29:32.353 [2024-07-13 21:16:23.122477] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182100 00:29:32.353 [2024-07-13 21:16:23.122487] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.353 [2024-07-13 21:16:23.122495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.353 [2024-07-13 21:16:23.122511] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.353 [2024-07-13 21:16:23.122517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:29:32.353 [2024-07-13 21:16:23.122525] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182100 00:29:32.353 [2024-07-13 21:16:23.122535] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.353 [2024-07-13 21:16:23.122543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.353 [2024-07-13 21:16:23.122563] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.353 [2024-07-13 21:16:23.122570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0
00:29:32.353 [2024-07-13 21:16:23.122577] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182100
00:29:32.353 [2024-07-13 21:16:23.122586] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100
00:29:32.353 [2024-07-13 21:16:23.122596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.353 [2024-07-13 21:16:23.122612] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.353 [2024-07-13 21:16:23.122618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0
[... the same nvme_rdma_request_ready / nvme_rdma_qpair_submit_request / FABRIC PROPERTY GET / CQ recv completion / SUCCESS (00/00) cdw0:1 cycle repeats on qid:0 cid:3, sqhd advancing 0x0017 through 0x001f and wrapping twice around the queue to 0x0007 (timestamps 21:16:23.122627 through 21:16:23.125007); the repeated entries are elided ...]
00:29:32.356 [2024-07-13 21:16:23.129023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.356 [2024-07-13 21:16:23.129044] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.356 [2024-07-13 21:16:23.129050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0
00:29:32.356 [2024-07-13 21:16:23.129056] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182100
00:29:32.356 [2024-07-13 21:16:23.129063] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:29:32.356 128
00:29:32.356 Transport Service Identifier: 4420
00:29:32.356 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:29:32.356 Transport Address: 192.168.100.8
00:29:32.356 Transport Specific Address Subtype - RDMA
00:29:32.356 RDMA QP Service Type: 1 (Reliable Connected)
00:29:32.356 RDMA Provider Type: 1 (No provider specified)
00:29:32.356 RDMA CM Service: 1 (RDMA_CM)
00:29:32.356 21:16:23 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:29:32.356 [2024-07-13 21:16:23.201926] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:29:32.356 [2024-07-13 21:16:23.201971] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3680377 ]
00:29:32.356 EAL: No free 2048 kB hugepages reported on node 1
00:29:32.356 [2024-07-13 21:16:23.248777] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:29:32.356 [2024-07-13 21:16:23.248850] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:29:32.356 [2024-07-13 21:16:23.248864] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:29:32.356 [2024-07-13 21:16:23.248869] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:29:32.356 [2024-07-13 21:16:23.248893] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:29:32.356 [2024-07-13 21:16:23.258446] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:29:32.619 [2024-07-13 21:16:23.273140] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0
00:29:32.619 [2024-07-13 21:16:23.273150] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
00:29:32.619 [2024-07-13 21:16:23.273157] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182100
[... 29 further identical nvme_rdma_create_rsps *DEBUG* entries for local addr 0x2000003cf668 through 0x2000003cfac8 (step 0x28, all length 0x10 lkey 0x182100, timestamps 21:16:23.273164 through 21:16:23.273341) are elided ...]
00:29:32.619 [2024-07-13 21:16:23.273348] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182100
00:29:32.619 [2024-07-13 21:16:23.273353] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:29:32.619 [2024-07-13 21:16:23.273359] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0
00:29:32.619 [2024-07-13 21:16:23.273363] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
00:29:32.619 [2024-07-13 21:16:23.273378] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.619 [2024-07-13 21:16:23.273389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182100
00:29:32.619 [2024-07-13 21:16:23.279018] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.619 [2024-07-13 21:16:23.279027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:29:32.619 [2024-07-13 21:16:23.279035] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182100
00:29:32.619 [2024-07-13 21:16:23.279042] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:29:32.619 [2024-07-13 21:16:23.279048] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:29:32.619 [2024-07-13 21:16:23.279054] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:29:32.619 [2024-07-13 21:16:23.279068] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.619 [2024-07-13 21:16:23.279076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.619 [2024-07-13 21:16:23.279099] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.619 [2024-07-13 21:16:23.279105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:29:32.619 [2024-07-13 21:16:23.279111] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:29:32.619 [2024-07-13 21:16:23.279117] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279124] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:29:32.620 [2024-07-13 21:16:23.279132] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.620 [2024-07-13 21:16:23.279157] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.279163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.279169] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:29:32.620 [2024-07-13 21:16:23.279177] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279185] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:29:32.620 [2024-07-13 21:16:23.279192] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.620 [2024-07-13 21:16:23.279217] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.279223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.279229] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:29:32.620 [2024-07-13 21:16:23.279235] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279244] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.620 [2024-07-13 21:16:23.279268] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.279273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.279279] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:29:32.620 [2024-07-13 21:16:23.279285] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:29:32.620 [2024-07-13 21:16:23.279291] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279298] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:29:32.620 [2024-07-13 21:16:23.279404] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:29:32.620 [2024-07-13 21:16:23.279409] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:29:32.620 [2024-07-13 21:16:23.279417] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.620 [2024-07-13 21:16:23.279448] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.279454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.279460] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:29:32.620 [2024-07-13 21:16:23.279466] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279474] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.620 [2024-07-13 21:16:23.279500] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.279506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.279513] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:29:32.620 [2024-07-13 21:16:23.279519] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279525] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279532] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:29:32.620 [2024-07-13 21:16:23.279544] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279553] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182100
00:29:32.620 [2024-07-13 21:16:23.279596] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.279602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.279610] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:29:32.620 [2024-07-13 21:16:23.279616] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:29:32.620 [2024-07-13 21:16:23.279621] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:29:32.620 [2024-07-13 21:16:23.279627] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:29:32.620 [2024-07-13 21:16:23.279632] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:29:32.620 [2024-07-13 21:16:23.279638] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279644] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279654] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279662] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.620 [2024-07-13 21:16:23.279691] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.279696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.279704] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.620 [2024-07-13 21:16:23.279719] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.620 [2024-07-13 21:16:23.279732] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.620 [2024-07-13 21:16:23.279746] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.620 [2024-07-13 21:16:23.279761] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279767] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279775] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279782] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.620 [2024-07-13 21:16:23.279812] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.279817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.279824] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:29:32.620 [2024-07-13 21:16:23.279830] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279836] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279843] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279850] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279857] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.620 [2024-07-13 21:16:23.279886] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.279891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.279943] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279949] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279957] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.279965] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.279972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182100
00:29:32.620 [2024-07-13 21:16:23.280004] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.280009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.280025] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:29:32.620 [2024-07-13 21:16:23.280036] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.280044] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.280051] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.280059] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.280067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182100
00:29:32.620 [2024-07-13 21:16:23.280090] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.280096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.280106] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.280113] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.280120] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.280128] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.280136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182100
00:29:32.620 [2024-07-13 21:16:23.280161] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.620 [2024-07-13 21:16:23.280167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:32.620 [2024-07-13 21:16:23.280175] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.280182] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.280189] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.280198] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.280205] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.280211] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.280217] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:29:32.620 [2024-07-13 21:16:23.280223] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:29:32.620 [2024-07-13 21:16:23.280229] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:29:32.620 [2024-07-13 21:16:23.280246] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.280254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.620 [2024-07-13 21:16:23.280261] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182100
00:29:32.620 [2024-07-13 21:16:23.280268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.621 [2024-07-13 21:16:23.280279] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.621 [2024-07-13 21:16:23.280286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:32.621 [2024-07-13 21:16:23.280293] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280299] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.621 [2024-07-13 21:16:23.280305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:32.621 [2024-07-13 21:16:23.280311] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280320] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.621 [2024-07-13 21:16:23.280344] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.621 [2024-07-13 21:16:23.280350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:32.621 [2024-07-13 21:16:23.280356] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280365] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.621 [2024-07-13 21:16:23.280392] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.621 [2024-07-13 21:16:23.280398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:32.621 [2024-07-13 21:16:23.280404] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280413] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:29:32.621 [2024-07-13 21:16:23.280440] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.621 [2024-07-13 21:16:23.280446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0
00:29:32.621 [2024-07-13 21:16:23.280452] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280462] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x182100
00:29:32.621 [2024-07-13 21:16:23.280478] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x182100
00:29:32.621 [2024-07-13 21:16:23.280493] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x182100
00:29:32.621 [2024-07-13 21:16:23.280509] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x182100
00:29:32.621 [2024-07-13 21:16:23.280526] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.621 [2024-07-13 21:16:23.280531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:32.621 [2024-07-13 21:16:23.280543] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280550] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.621 [2024-07-13 21:16:23.280555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:32.621 [2024-07-13 21:16:23.280565] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280571] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:29:32.621 [2024-07-13 21:16:23.280577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:32.621 [2024-07-13 21:16:23.280586] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182100
00:29:32.621 [2024-07-13 21:16:23.280592]
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.621 [2024-07-13 21:16:23.280597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:32.621 [2024-07-13 21:16:23.280607] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182100 00:29:32.621 ===================================================== 00:29:32.621 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.621 ===================================================== 00:29:32.621 Controller Capabilities/Features 00:29:32.621 ================================ 00:29:32.621 Vendor ID: 8086 00:29:32.621 Subsystem Vendor ID: 8086 00:29:32.621 Serial Number: SPDK00000000000001 00:29:32.621 Model Number: SPDK bdev Controller 00:29:32.621 Firmware Version: 24.05.1 00:29:32.621 Recommended Arb Burst: 6 00:29:32.621 IEEE OUI Identifier: e4 d2 5c 00:29:32.621 Multi-path I/O 00:29:32.621 May have multiple subsystem ports: Yes 00:29:32.621 May have multiple controllers: Yes 00:29:32.621 Associated with SR-IOV VF: No 00:29:32.621 Max Data Transfer Size: 131072 00:29:32.621 Max Number of Namespaces: 32 00:29:32.621 Max Number of I/O Queues: 127 00:29:32.621 NVMe Specification Version (VS): 1.3 00:29:32.621 NVMe Specification Version (Identify): 1.3 00:29:32.621 Maximum Queue Entries: 128 00:29:32.621 Contiguous Queues Required: Yes 00:29:32.621 Arbitration Mechanisms Supported 00:29:32.621 Weighted Round Robin: Not Supported 00:29:32.621 Vendor Specific: Not Supported 00:29:32.621 Reset Timeout: 15000 ms 00:29:32.621 Doorbell Stride: 4 bytes 00:29:32.621 NVM Subsystem Reset: Not Supported 00:29:32.621 Command Sets Supported 00:29:32.621 NVM Command Set: Supported 00:29:32.621 Boot Partition: Not Supported 00:29:32.621 Memory Page Size Minimum: 4096 bytes 00:29:32.621 Memory Page Size Maximum: 4096 bytes 00:29:32.621 Persistent Memory Region: Not Supported 00:29:32.621 Optional Asynchronous Events Supported 00:29:32.621 Namespace Attribute Notices: Supported 00:29:32.621 Firmware Activation Notices: Not Supported 00:29:32.621 ANA Change Notices: Not Supported 00:29:32.621 PLE Aggregate Log Change Notices: Not Supported 00:29:32.621 LBA Status Info Alert Notices: Not Supported 00:29:32.621 EGE Aggregate Log Change Notices: Not Supported 00:29:32.621 Normal NVM Subsystem Shutdown event: Not Supported 00:29:32.621 Zone Descriptor Change Notices: Not Supported 00:29:32.621 Discovery Log Change Notices: Not Supported 00:29:32.621 Controller Attributes 00:29:32.621 128-bit Host Identifier: Supported 00:29:32.621 Non-Operational Permissive Mode: Not Supported 00:29:32.621 NVM Sets: Not Supported 00:29:32.621 Read Recovery Levels: Not Supported 00:29:32.621 Endurance Groups: Not Supported 00:29:32.621 Predictable Latency Mode: Not Supported 00:29:32.621 Traffic Based Keep ALive: Not Supported 00:29:32.621 Namespace Granularity: Not Supported 00:29:32.621 SQ Associations: Not Supported 00:29:32.621 UUID List: Not Supported 00:29:32.621 Multi-Domain Subsystem: Not Supported 00:29:32.621 Fixed Capacity Management: Not Supported 00:29:32.621 Variable Capacity Management: Not Supported 00:29:32.621 Delete Endurance Group: Not Supported 00:29:32.621 Delete NVM Set: Not Supported 00:29:32.621 Extended LBA Formats Supported: Not Supported 00:29:32.621 Flexible Data Placement Supported: Not Supported 00:29:32.621 00:29:32.621 Controller Memory Buffer Support 00:29:32.621 
================================ 00:29:32.621 Supported: No 00:29:32.621 00:29:32.621 Persistent Memory Region Support 00:29:32.621 ================================ 00:29:32.621 Supported: No 00:29:32.621 00:29:32.621 Admin Command Set Attributes 00:29:32.621 ============================ 00:29:32.621 Security Send/Receive: Not Supported 00:29:32.621 Format NVM: Not Supported 00:29:32.621 Firmware Activate/Download: Not Supported 00:29:32.621 Namespace Management: Not Supported 00:29:32.621 Device Self-Test: Not Supported 00:29:32.621 Directives: Not Supported 00:29:32.621 NVMe-MI: Not Supported 00:29:32.621 Virtualization Management: Not Supported 00:29:32.621 Doorbell Buffer Config: Not Supported 00:29:32.621 Get LBA Status Capability: Not Supported 00:29:32.621 Command & Feature Lockdown Capability: Not Supported 00:29:32.621 Abort Command Limit: 4 00:29:32.621 Async Event Request Limit: 4 00:29:32.621 Number of Firmware Slots: N/A 00:29:32.621 Firmware Slot 1 Read-Only: N/A 00:29:32.621 Firmware Activation Without Reset: N/A 00:29:32.621 Multiple Update Detection Support: N/A 00:29:32.621 Firmware Update Granularity: No Information Provided 00:29:32.621 Per-Namespace SMART Log: No 00:29:32.621 Asymmetric Namespace Access Log Page: Not Supported 00:29:32.621 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:32.621 Command Effects Log Page: Supported 00:29:32.621 Get Log Page Extended Data: Supported 00:29:32.621 Telemetry Log Pages: Not Supported 00:29:32.621 Persistent Event Log Pages: Not Supported 00:29:32.621 Supported Log Pages Log Page: May Support 00:29:32.621 Commands Supported & Effects Log Page: Not Supported 00:29:32.621 Feature Identifiers & Effects Log Page:May Support 00:29:32.621 NVMe-MI Commands & Effects Log Page: May Support 00:29:32.621 Data Area 4 for Telemetry Log: Not Supported 00:29:32.621 Error Log Page Entries Supported: 128 00:29:32.621 Keep Alive: Supported 00:29:32.621 Keep Alive Granularity: 10000 ms 00:29:32.621 00:29:32.621 NVM Command Set Attributes 00:29:32.621 ========================== 00:29:32.621 Submission Queue Entry Size 00:29:32.621 Max: 64 00:29:32.621 Min: 64 00:29:32.621 Completion Queue Entry Size 00:29:32.621 Max: 16 00:29:32.621 Min: 16 00:29:32.621 Number of Namespaces: 32 00:29:32.621 Compare Command: Supported 00:29:32.621 Write Uncorrectable Command: Not Supported 00:29:32.621 Dataset Management Command: Supported 00:29:32.621 Write Zeroes Command: Supported 00:29:32.621 Set Features Save Field: Not Supported 00:29:32.621 Reservations: Supported 00:29:32.621 Timestamp: Not Supported 00:29:32.621 Copy: Supported 00:29:32.621 Volatile Write Cache: Present 00:29:32.621 Atomic Write Unit (Normal): 1 00:29:32.621 Atomic Write Unit (PFail): 1 00:29:32.621 Atomic Compare & Write Unit: 1 00:29:32.621 Fused Compare & Write: Supported 00:29:32.621 Scatter-Gather List 00:29:32.621 SGL Command Set: Supported 00:29:32.621 SGL Keyed: Supported 00:29:32.621 SGL Bit Bucket Descriptor: Not Supported 00:29:32.621 SGL Metadata Pointer: Not Supported 00:29:32.621 Oversized SGL: Not Supported 00:29:32.622 SGL Metadata Address: Not Supported 00:29:32.622 SGL Offset: Supported 00:29:32.622 Transport SGL Data Block: Not Supported 00:29:32.622 Replay Protected Memory Block: Not Supported 00:29:32.622 00:29:32.622 Firmware Slot Information 00:29:32.622 ========================= 00:29:32.622 Active slot: 1 00:29:32.622 Slot 1 Firmware Revision: 24.05.1 00:29:32.622 00:29:32.622 00:29:32.622 Commands Supported and Effects 00:29:32.622 ============================== 
00:29:32.622 Admin Commands 00:29:32.622 -------------- 00:29:32.622 Get Log Page (02h): Supported 00:29:32.622 Identify (06h): Supported 00:29:32.622 Abort (08h): Supported 00:29:32.622 Set Features (09h): Supported 00:29:32.622 Get Features (0Ah): Supported 00:29:32.622 Asynchronous Event Request (0Ch): Supported 00:29:32.622 Keep Alive (18h): Supported 00:29:32.622 I/O Commands 00:29:32.622 ------------ 00:29:32.622 Flush (00h): Supported LBA-Change 00:29:32.622 Write (01h): Supported LBA-Change 00:29:32.622 Read (02h): Supported 00:29:32.622 Compare (05h): Supported 00:29:32.622 Write Zeroes (08h): Supported LBA-Change 00:29:32.622 Dataset Management (09h): Supported LBA-Change 00:29:32.622 Copy (19h): Supported LBA-Change 00:29:32.622 Unknown (79h): Supported LBA-Change 00:29:32.622 Unknown (7Ah): Supported 00:29:32.622 00:29:32.622 Error Log 00:29:32.622 ========= 00:29:32.622 00:29:32.622 Arbitration 00:29:32.622 =========== 00:29:32.622 Arbitration Burst: 1 00:29:32.622 00:29:32.622 Power Management 00:29:32.622 ================ 00:29:32.622 Number of Power States: 1 00:29:32.622 Current Power State: Power State #0 00:29:32.622 Power State #0: 00:29:32.622 Max Power: 0.00 W 00:29:32.622 Non-Operational State: Operational 00:29:32.622 Entry Latency: Not Reported 00:29:32.622 Exit Latency: Not Reported 00:29:32.622 Relative Read Throughput: 0 00:29:32.622 Relative Read Latency: 0 00:29:32.622 Relative Write Throughput: 0 00:29:32.622 Relative Write Latency: 0 00:29:32.622 Idle Power: Not Reported 00:29:32.622 Active Power: Not Reported 00:29:32.622 Non-Operational Permissive Mode: Not Supported 00:29:32.622 00:29:32.622 Health Information 00:29:32.622 ================== 00:29:32.622 Critical Warnings: 00:29:32.622 Available Spare Space: OK 00:29:32.622 Temperature: OK 00:29:32.622 Device Reliability: OK 00:29:32.622 Read Only: No 00:29:32.622 Volatile Memory Backup: OK 00:29:32.622 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:32.622 Temperature Threshol[2024-07-13 21:16:23.280689] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.280697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.280721] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.280726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.280733] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.280756] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:32.622 [2024-07-13 21:16:23.280765] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 41299 doesn't match qid 00:29:32.622 [2024-07-13 21:16:23.280780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32679 cdw0:5 sqhd:05f0 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.280786] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 41299 doesn't match qid 00:29:32.622 [2024-07-13 21:16:23.280794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32679 cdw0:5 sqhd:05f0 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.280801] 
nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 41299 doesn't match qid 00:29:32.622 [2024-07-13 21:16:23.280809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32679 cdw0:5 sqhd:05f0 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.280815] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 41299 doesn't match qid 00:29:32.622 [2024-07-13 21:16:23.280823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32679 cdw0:5 sqhd:05f0 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.280832] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.280840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.280856] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.280864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.280872] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.280879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.280886] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.280908] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.280913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.280919] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:32.622 [2024-07-13 21:16:23.280926] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:32.622 [2024-07-13 21:16:23.280932] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.280941] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.280949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.280969] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.280974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.280981] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.280991] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.280999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.281019] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.281025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.281032] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281040] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.281071] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.281077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.281084] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281092] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.281126] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.281132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.281139] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281149] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.281180] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.281187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.281193] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281202] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.281227] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.281233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.281240] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281249] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.281277] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.281283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.281289] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281298] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.281324] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.281329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.281335] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281344] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.281368] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.281374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.281381] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281390] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.622 [2024-07-13 21:16:23.281415] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.622 [2024-07-13 21:16:23.281421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:29:32.622 [2024-07-13 21:16:23.281427] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182100 00:29:32.622 [2024-07-13 21:16:23.281437] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281469] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 
21:16:23.281480] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281489] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281516] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.281528] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281537] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281562] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.281574] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281583] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281606] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.281618] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281627] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281660] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.281672] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281681] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281712] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.281725] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281734] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281759] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.281771] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281780] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281807] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.281819] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281827] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281855] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.281866] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281875] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281906] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.281918] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281927] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.281954] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.281959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.281966] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281974] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.281982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.282001] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.282007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.282018] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282027] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.282051] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.282056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.282062] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282071] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.282094] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.282100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.282106] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282115] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.282142] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.282148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 
21:16:23.282154] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282162] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.282188] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.282193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.282200] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282208] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.282236] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.282241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:29:32.623 [2024-07-13 21:16:23.282247] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282256] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.623 [2024-07-13 21:16:23.282264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.623 [2024-07-13 21:16:23.282287] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.623 [2024-07-13 21:16:23.282294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282300] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282309] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282332] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282344] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282353] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282376] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282388] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282396] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282422] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282434] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282442] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282468] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282479] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282488] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282517] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282529] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282538] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282569] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282582] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282590] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282620] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282631] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282640] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282667] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282679] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282688] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282711] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282723] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282732] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282755] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282767] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282775] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282803] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 
21:16:23.282814] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282823] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282848] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282860] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282868] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282892] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282904] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282912] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282938] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282949] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282958] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.282966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.282986] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.282991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.282997] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.283006] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.287020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:32.624 [2024-07-13 21:16:23.287039] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:32.624 [2024-07-13 21:16:23.287044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0 00:29:32.624 [2024-07-13 21:16:23.287051] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182100 00:29:32.624 [2024-07-13 21:16:23.287058] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:29:32.624 d: 0 Kelvin (-273 Celsius) 00:29:32.624 Available Spare: 0% 00:29:32.624 Available Spare Threshold: 0% 00:29:32.624 Life Percentage Used: 0% 00:29:32.624 Data Units Read: 0 00:29:32.624 Data Units Written: 0 00:29:32.624 Host Read Commands: 0 00:29:32.624 Host Write Commands: 0 00:29:32.624 Controller Busy Time: 0 minutes 00:29:32.624 Power Cycles: 0 00:29:32.624 Power On Hours: 0 hours 00:29:32.624 Unsafe Shutdowns: 0 00:29:32.624 Unrecoverable Media Errors: 0 00:29:32.624 Lifetime Error Log Entries: 0 00:29:32.624 Warning Temperature Time: 0 minutes 00:29:32.624 Critical Temperature Time: 0 minutes 00:29:32.624 00:29:32.624 Number of Queues 00:29:32.624 ================ 00:29:32.624 Number of I/O Submission Queues: 127 00:29:32.624 Number of I/O Completion Queues: 127 00:29:32.624 00:29:32.624 Active Namespaces 00:29:32.624 ================= 00:29:32.624 Namespace ID:1 00:29:32.624 Error Recovery Timeout: Unlimited 00:29:32.624 Command Set Identifier: NVM (00h) 00:29:32.624 Deallocate: Supported 00:29:32.624 Deallocated/Unwritten Error: Not Supported 00:29:32.624 Deallocated Read Value: Unknown 00:29:32.624 Deallocate in Write Zeroes: Not Supported 00:29:32.624 Deallocated Guard Field: 0xFFFF 00:29:32.624 Flush: Supported 00:29:32.624 Reservation: Supported 00:29:32.624 Namespace Sharing Capabilities: Multiple Controllers 00:29:32.624 Size (in LBAs): 131072 (0GiB) 00:29:32.624 Capacity (in LBAs): 131072 (0GiB) 00:29:32.624 Utilization (in LBAs): 131072 (0GiB) 00:29:32.624 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:32.624 EUI64: ABCDEF0123456789 00:29:32.624 UUID: ceb6f7a6-3cfd-4519-b3c2-0742c0cf4e72 00:29:32.624 Thin Provisioning: Not Supported 00:29:32.624 Per-NS Atomic Units: Yes 00:29:32.624 Atomic Boundary Size (Normal): 0 00:29:32.624 Atomic Boundary Size (PFail): 0 00:29:32.624 Atomic Boundary Offset: 0 00:29:32.624 Maximum Single Source Range Length: 65535 00:29:32.624 Maximum Copy Length: 65535 00:29:32.624 Maximum Source Range Count: 1 00:29:32.624 NGUID/EUI64 Never Reused: No 00:29:32.624 Namespace Write Protected: No 00:29:32.625 Number of LBA Formats: 1 00:29:32.625 Current LBA Format: LBA Format #00 00:29:32.625 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:32.625 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:32.625 rmmod nvme_rdma 00:29:32.625 rmmod nvme_fabrics 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3680089 ']' 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3680089 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3680089 ']' 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3680089 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3680089 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3680089' 00:29:32.625 killing process with pid 3680089 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3680089 00:29:32.625 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3680089 00:29:32.884 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:32.884 21:16:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:32.884 00:29:32.884 real 0m8.638s 00:29:32.884 user 0m8.541s 00:29:32.884 sys 0m5.555s 00:29:32.884 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:32.884 21:16:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:32.884 ************************************ 00:29:32.884 END TEST nvmf_identify 00:29:32.884 ************************************ 00:29:32.884 21:16:23 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:29:32.884 21:16:23 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:32.884 21:16:23 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:32.884 21:16:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:33.143 ************************************ 00:29:33.143 START TEST nvmf_perf 00:29:33.143 ************************************ 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:29:33.143 * Looking for test storage... 
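The nvmf_identify test that just finished is what produced the controller bring-up and teardown traced above: a fabrics connect, the identify state machine (identify ctrlr/ns, set features, keep alive, get log pages), the controller-data dump, and finally a detach that emits the "Prepare to destruct SSD" shutdown property reads. As a reading aid, here is a minimal host-side sketch of that flow against SPDK's public NVMe C API. The transport address and subsystem NQN are copied from the log; the structure and error handling are illustrative only, and this is not the actual identify example binary the test runs.

    /* Sketch: connect to the target logged above, read identify data, detach.
     * Links against SPDK (spdk_nvme, spdk_env_dpdk); error paths trimmed. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same listener the log connects to: RDMA at 192.168.100.8:4420. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Blocking connect; this runs the init state machine whose
         * _nvme_ctrlr_set_state transitions appear in the debug log. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* The cached identify data behind the big dump above. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);
        printf("Model Number:  %.40s\n", (const char *)cdata->mn);

        /* Triggers the shutdown sequence (CC write, then CSTS polling) seen
         * as the FABRIC PROPERTY SET/GET pairs near the end of the log. */
        spdk_nvme_detach(ctrlr);
        return 0;
    }

Each admin command the connect issues shows up in the log as a matching nvme_admin_qpair_print_command / spdk_nvme_print_completion pair, which is a convenient way to follow the state machine when debugging a stuck bring-up; the repeated FABRIC PROPERTY GET entries after "Prepare to destruct SSD" are the detach path polling the shutdown status until it reads complete ("shutdown complete in 6 milliseconds").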
00:29:33.143 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.143 21:16:23 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.144 
21:16:23 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:33.144 21:16:23 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@10 -- # set +x 00:29:39.714 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:39.714 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:39.714 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:39.714 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:39.714 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:39.714 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:39.714 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:39.715 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 
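The block above is the harness's NIC discovery: common.sh keeps arrays of known Intel (e810/x722) and Mellanox PCI device IDs, walks the PCI bus, and here matches the first of two ports of a Mellanox 0x15b3:0x1015 part (a ConnectX-4 Lx-class device) at 0000:d9:00.0; the second port follows just below. A stand-alone sketch of the same lookup, using a subset of the device IDs listed in this log:

  # Sketch: enumerate RDMA-capable NICs by PCI vendor:device ID,
  # then resolve each match to its kernel netdev via sysfs.
  for id in 15b3:1015 15b3:1017 15b3:1019 8086:1592 8086:159b 8086:37d2; do
      lspci -D -d "$id" | while read -r addr _; do
          echo "RDMA candidate: $addr ($id)"
          ls "/sys/bus/pci/devices/$addr/net" 2>/dev/null
      done
  done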
00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:39.715 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:39.715 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:39.715 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 
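rdma_device_init, called next, loads the InfiniBand/RDMA kernel modules, and allocate_nic_ips then gives the two Mellanox ports the 192.168.100.0/24 addresses used for the rest of the run. Reproduced outside the harness, that preparation looks roughly like this (the mlx_0_* interface names are the ones this run discovers; they will differ on other hosts):

  # Sketch: load the RDMA stack and address the ports, as rdma_device_init does.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
  ip addr add 192.168.100.8/24 dev mlx_0_0   # first port, NVMF_FIRST_TARGET_IP
  ip addr add 192.168.100.9/24 dev mlx_0_1   # second port, NVMF_SECOND_TARGET_IP
  ip link set mlx_0_0 up
  ip link set mlx_0_1 up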
00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf 
-- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:29:39.715 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:39.715 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:39.715 altname enp217s0f0np0 00:29:39.715 altname ens818f0np0 00:29:39.715 inet 192.168.100.8/24 scope global mlx_0_0 00:29:39.715 valid_lft forever preferred_lft forever 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:29:39.715 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:39.715 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:39.715 altname enp217s0f1np1 00:29:39.715 altname ens818f1np1 00:29:39.715 inet 192.168.100.9/24 scope global mlx_0_1 00:29:39.715 valid_lft forever preferred_lft forever 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:39.715 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:39.716 21:16:30 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:39.716 192.168.100.9' 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:39.716 192.168.100.9' 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:39.716 192.168.100.9' 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3683764 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3683764 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3683764 ']' 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:39.716 21:16:30 
nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:39.716 21:16:30 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:39.975 [2024-07-13 21:16:30.629169] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:39.975 [2024-07-13 21:16:30.629221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.975 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.975 [2024-07-13 21:16:30.702588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:39.975 [2024-07-13 21:16:30.743122] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.975 [2024-07-13 21:16:30.743162] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.975 [2024-07-13 21:16:30.743173] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.975 [2024-07-13 21:16:30.743182] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.975 [2024-07-13 21:16:30.743190] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:39.975 [2024-07-13 21:16:30.743241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.975 [2024-07-13 21:16:30.743340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.975 [2024-07-13 21:16:30.743424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:39.975 [2024-07-13 21:16:30.743426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.543 21:16:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:40.543 21:16:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:29:40.543 21:16:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:40.543 21:16:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:40.543 21:16:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:40.802 21:16:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.802 21:16:31 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:40.802 21:16:31 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:44.091 21:16:34 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:44.091 21:16:34 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:44.091 21:16:34 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:29:44.091 21:16:34 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:44.091 21:16:34 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' 
Malloc0' 00:29:44.091 21:16:34 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:29:44.091 21:16:34 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:44.091 21:16:34 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:29:44.091 21:16:34 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:29:44.351 [2024-07-13 21:16:35.029579] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:29:44.351 [2024-07-13 21:16:35.052057] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2186f00/0x2195640) succeed. 00:29:44.351 [2024-07-13 21:16:35.062732] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2188540/0x2215680) succeed. 00:29:44.351 21:16:35 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:44.610 21:16:35 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:44.610 21:16:35 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:44.869 21:16:35 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:44.869 21:16:35 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:44.869 21:16:35 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:45.127 [2024-07-13 21:16:35.866015] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:45.127 21:16:35 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:45.386 21:16:36 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:29:45.386 21:16:36 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:29:45.386 21:16:36 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:45.386 21:16:36 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:29:46.765 Initializing NVMe Controllers 00:29:46.765 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:29:46.765 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:29:46.765 Initialization complete. Launching workers. 
00:29:46.765 ======================================================== 00:29:46.765 Latency(us) 00:29:46.765 Device Information : IOPS MiB/s Average min max 00:29:46.765 PCIE (0000:d8:00.0) NSID 1 from core 0: 102118.58 398.90 312.95 39.05 4271.99 00:29:46.765 ======================================================== 00:29:46.765 Total : 102118.58 398.90 312.95 39.05 4271.99 00:29:46.765 00:29:46.765 21:16:37 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:46.765 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.055 Initializing NVMe Controllers 00:29:50.055 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.055 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:50.055 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:50.055 Initialization complete. Launching workers. 00:29:50.055 ======================================================== 00:29:50.055 Latency(us) 00:29:50.055 Device Information : IOPS MiB/s Average min max 00:29:50.055 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6695.99 26.16 149.14 46.11 4147.65 00:29:50.055 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5172.99 20.21 193.11 74.47 4120.44 00:29:50.055 ======================================================== 00:29:50.055 Total : 11868.99 46.36 168.31 46.11 4147.65 00:29:50.055 00:29:50.055 21:16:40 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:50.055 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.344 Initializing NVMe Controllers 00:29:53.344 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:53.344 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:53.344 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:53.344 Initialization complete. Launching workers. 00:29:53.344 ======================================================== 00:29:53.344 Latency(us) 00:29:53.344 Device Information : IOPS MiB/s Average min max 00:29:53.344 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18369.00 71.75 1742.23 503.44 6361.19 00:29:53.344 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7971.28 5938.54 9009.22 00:29:53.344 ======================================================== 00:29:53.344 Total : 22401.00 87.50 2863.41 503.44 9009.22 00:29:53.344 00:29:53.344 21:16:44 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:29:53.344 21:16:44 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:53.344 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.608 Initializing NVMe Controllers 00:29:57.608 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.608 Controller IO queue size 128, less than required. 
00:29:57.608 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.608 Controller IO queue size 128, less than required. 00:29:57.608 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.608 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.608 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:57.608 Initialization complete. Launching workers. 00:29:57.608 ======================================================== 00:29:57.608 Latency(us) 00:29:57.608 Device Information : IOPS MiB/s Average min max 00:29:57.608 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3990.32 997.58 32123.51 10521.23 67892.92 00:29:57.608 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4018.27 1004.57 31645.93 13826.75 51388.18 00:29:57.608 ======================================================== 00:29:57.608 Total : 8008.59 2002.15 31883.88 10521.23 67892.92 00:29:57.608 00:29:57.608 21:16:48 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:29:57.867 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.126 No valid NVMe controllers or AIO or URING devices found 00:29:58.126 Initializing NVMe Controllers 00:29:58.126 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.126 Controller IO queue size 128, less than required. 00:29:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.126 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:58.126 Controller IO queue size 128, less than required. 00:29:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.126 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:58.126 WARNING: Some requested NVMe devices were skipped 00:29:58.126 21:16:48 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:29:58.126 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.317 Initializing NVMe Controllers 00:30:02.317 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.317 Controller IO queue size 128, less than required. 00:30:02.317 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:02.317 Controller IO queue size 128, less than required. 00:30:02.317 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:02.317 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:02.317 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:02.317 Initialization complete. Launching workers. 
00:30:02.317 00:30:02.317 ==================== 00:30:02.317 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:02.317 RDMA transport: 00:30:02.317 dev name: mlx5_0 00:30:02.317 polls: 412405 00:30:02.317 idle_polls: 408783 00:30:02.317 completions: 45342 00:30:02.317 queued_requests: 1 00:30:02.317 total_send_wrs: 22671 00:30:02.317 send_doorbell_updates: 3387 00:30:02.317 total_recv_wrs: 22798 00:30:02.317 recv_doorbell_updates: 3392 00:30:02.317 --------------------------------- 00:30:02.317 00:30:02.317 ==================== 00:30:02.317 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:02.317 RDMA transport: 00:30:02.317 dev name: mlx5_0 00:30:02.317 polls: 414391 00:30:02.317 idle_polls: 414110 00:30:02.317 completions: 20178 00:30:02.317 queued_requests: 1 00:30:02.317 total_send_wrs: 10089 00:30:02.317 send_doorbell_updates: 259 00:30:02.317 total_recv_wrs: 10216 00:30:02.317 recv_doorbell_updates: 260 00:30:02.317 --------------------------------- 00:30:02.317 ======================================================== 00:30:02.317 Latency(us) 00:30:02.317 Device Information : IOPS MiB/s Average min max 00:30:02.317 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5667.50 1416.88 22657.07 11218.51 53890.91 00:30:02.317 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2522.00 630.50 50790.08 29358.15 75976.84 00:30:02.317 ======================================================== 00:30:02.317 Total : 8189.50 2047.38 31320.78 11218.51 75976.84 00:30:02.317 00:30:02.577 21:16:53 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:02.577 21:16:53 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:02.577 21:16:53 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:02.577 21:16:53 nvmf_rdma.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:30:02.577 21:16:53 nvmf_rdma.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- host/perf.sh@72 -- # ls_guid=c79257d2-0cd3-423a-98de-f4e434b78c9d 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c79257d2-0cd3-423a-98de-f4e434b78c9d 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=c79257d2-0cd3-423a-98de-f4e434b78c9d 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:09.145 { 00:30:09.145 "uuid": "c79257d2-0cd3-423a-98de-f4e434b78c9d", 00:30:09.145 "name": "lvs_0", 00:30:09.145 "base_bdev": "Nvme0n1", 00:30:09.145 "total_data_clusters": 476466, 00:30:09.145 "free_clusters": 476466, 00:30:09.145 "block_size": 512, 00:30:09.145 "cluster_size": 4194304 00:30:09.145 } 00:30:09.145 ]' 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | 
select(.uuid=="c79257d2-0cd3-423a-98de-f4e434b78c9d") .free_clusters' 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=476466 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="c79257d2-0cd3-423a-98de-f4e434b78c9d") .cluster_size' 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=1905864 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 1905864 00:30:09.145 1905864 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:09.145 21:16:59 nvmf_rdma.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c79257d2-0cd3-423a-98de-f4e434b78c9d lbd_0 20480 00:30:09.404 21:17:00 nvmf_rdma.nvmf_perf -- host/perf.sh@80 -- # lb_guid=8577090e-fc90-41a3-80f0-e473da8e8c89 00:30:09.404 21:17:00 nvmf_rdma.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8577090e-fc90-41a3-80f0-e473da8e8c89 lvs_n_0 00:30:11.328 21:17:02 nvmf_rdma.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2431c3d6-1115-44e2-8206-05df7cdf0576 00:30:11.328 21:17:02 nvmf_rdma.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2431c3d6-1115-44e2-8206-05df7cdf0576 00:30:11.328 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=2431c3d6-1115-44e2-8206-05df7cdf0576 00:30:11.328 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:11.328 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:30:11.328 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:30:11.328 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:11.587 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:11.587 { 00:30:11.587 "uuid": "c79257d2-0cd3-423a-98de-f4e434b78c9d", 00:30:11.587 "name": "lvs_0", 00:30:11.587 "base_bdev": "Nvme0n1", 00:30:11.587 "total_data_clusters": 476466, 00:30:11.587 "free_clusters": 471346, 00:30:11.587 "block_size": 512, 00:30:11.587 "cluster_size": 4194304 00:30:11.587 }, 00:30:11.587 { 00:30:11.587 "uuid": "2431c3d6-1115-44e2-8206-05df7cdf0576", 00:30:11.587 "name": "lvs_n_0", 00:30:11.587 "base_bdev": "8577090e-fc90-41a3-80f0-e473da8e8c89", 00:30:11.587 "total_data_clusters": 5114, 00:30:11.587 "free_clusters": 5114, 00:30:11.587 "block_size": 512, 00:30:11.587 "cluster_size": 4194304 00:30:11.587 } 00:30:11.587 ]' 00:30:11.587 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="2431c3d6-1115-44e2-8206-05df7cdf0576") .free_clusters' 00:30:11.587 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:30:11.587 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="2431c3d6-1115-44e2-8206-05df7cdf0576") .cluster_size' 00:30:11.587 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:11.587 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:30:11.587 21:17:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 
20456 00:30:11.587 20456 00:30:11.587 21:17:02 nvmf_rdma.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:11.587 21:17:02 nvmf_rdma.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2431c3d6-1115-44e2-8206-05df7cdf0576 lbd_nest_0 20456 00:30:11.846 21:17:02 nvmf_rdma.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=03908969-8e4c-4204-99d3-d25ca4603cc5 00:30:11.846 21:17:02 nvmf_rdma.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.105 21:17:02 nvmf_rdma.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:12.105 21:17:02 nvmf_rdma.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 03908969-8e4c-4204-99d3-d25ca4603cc5 00:30:12.105 21:17:02 nvmf_rdma.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:12.365 21:17:03 nvmf_rdma.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:12.365 21:17:03 nvmf_rdma.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:12.365 21:17:03 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:12.365 21:17:03 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:12.365 21:17:03 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:12.365 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.568 Initializing NVMe Controllers 00:30:24.568 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.568 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:24.568 Initialization complete. Launching workers. 00:30:24.568 ======================================================== 00:30:24.568 Latency(us) 00:30:24.568 Device Information : IOPS MiB/s Average min max 00:30:24.568 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5834.47 2.85 170.98 68.77 6012.19 00:30:24.568 ======================================================== 00:30:24.568 Total : 5834.47 2.85 170.98 68.77 6012.19 00:30:24.568 00:30:24.568 21:17:14 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:24.568 21:17:14 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:24.568 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.805 Initializing NVMe Controllers 00:30:36.805 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:36.805 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:36.805 Initialization complete. Launching workers. 
00:30:36.805 ======================================================== 00:30:36.805 Latency(us) 00:30:36.805 Device Information : IOPS MiB/s Average min max 00:30:36.805 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2645.50 330.69 377.34 157.57 8003.42 00:30:36.805 ======================================================== 00:30:36.805 Total : 2645.50 330.69 377.34 157.57 8003.42 00:30:36.805 00:30:36.805 21:17:25 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:36.805 21:17:25 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:36.805 21:17:25 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:36.805 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.822 Initializing NVMe Controllers 00:30:46.822 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:46.822 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:46.822 Initialization complete. Launching workers. 00:30:46.822 ======================================================== 00:30:46.822 Latency(us) 00:30:46.822 Device Information : IOPS MiB/s Average min max 00:30:46.822 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11206.10 5.47 2855.19 1040.69 7179.54 00:30:46.822 ======================================================== 00:30:46.822 Total : 11206.10 5.47 2855.19 1040.69 7179.54 00:30:46.822 00:30:46.822 21:17:37 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:46.822 21:17:37 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:46.822 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.027 Initializing NVMe Controllers 00:30:59.027 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:59.027 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:59.027 Initialization complete. Launching workers. 00:30:59.027 ======================================================== 00:30:59.027 Latency(us) 00:30:59.027 Device Information : IOPS MiB/s Average min max 00:30:59.027 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4007.40 500.92 7989.33 4920.65 15219.74 00:30:59.027 ======================================================== 00:30:59.027 Total : 4007.40 500.92 7989.33 4920.65 15219.74 00:30:59.027 00:30:59.027 21:17:48 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:59.027 21:17:48 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:59.027 21:17:48 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:59.027 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.230 Initializing NVMe Controllers 00:31:11.230 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:11.230 Controller IO queue size 128, less than required. 
00:31:11.231 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:11.231 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:11.231 Initialization complete. Launching workers. 00:31:11.231 ======================================================== 00:31:11.231 Latency(us) 00:31:11.231 Device Information : IOPS MiB/s Average min max 00:31:11.231 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18456.60 9.01 6938.13 2090.58 15807.11 00:31:11.231 ======================================================== 00:31:11.231 Total : 18456.60 9.01 6938.13 2090.58 15807.11 00:31:11.231 00:31:11.231 21:17:59 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:11.231 21:17:59 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:11.231 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.208 Initializing NVMe Controllers 00:31:21.208 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:21.208 Controller IO queue size 128, less than required. 00:31:21.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:21.208 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:21.208 Initialization complete. Launching workers. 00:31:21.208 ======================================================== 00:31:21.208 Latency(us) 00:31:21.208 Device Information : IOPS MiB/s Average min max 00:31:21.208 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10931.10 1366.39 11708.19 3473.86 25341.31 00:31:21.208 ======================================================== 00:31:21.208 Total : 10931.10 1366.39 11708.19 3473.86 25341.31 00:31:21.208 00:31:21.208 21:18:11 nvmf_rdma.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:21.208 21:18:11 nvmf_rdma.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 03908969-8e4c-4204-99d3-d25ca4603cc5 00:31:21.467 21:18:12 nvmf_rdma.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:21.467 21:18:12 nvmf_rdma.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8577090e-fc90-41a3-80f0-e473da8e8c89 00:31:21.729 21:18:12 nvmf_rdma.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- 
# for i in {1..20} 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:31:22.051 rmmod nvme_rdma 00:31:22.051 rmmod nvme_fabrics 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3683764 ']' 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3683764 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3683764 ']' 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3683764 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3683764 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3683764' 00:31:22.051 killing process with pid 3683764 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3683764 00:31:22.051 21:18:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3683764 00:31:24.602 21:18:15 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:24.603 21:18:15 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:31:24.603 00:31:24.603 real 1m51.449s 00:31:24.603 user 7m1.287s 00:31:24.603 sys 0m7.131s 00:31:24.603 21:18:15 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:24.603 21:18:15 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:24.603 ************************************ 00:31:24.603 END TEST nvmf_perf 00:31:24.603 ************************************ 00:31:24.603 21:18:15 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:31:24.603 21:18:15 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:24.603 21:18:15 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:24.603 21:18:15 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:24.603 ************************************ 00:31:24.603 START TEST nvmf_fio_host 00:31:24.603 ************************************ 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:31:24.603 * Looking for test storage... 
00:31:24.603 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:24.603 21:18:15 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:31.166 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
00:31:31.167 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:31.167 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:31.167 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:31.167 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:31.167 21:18:21 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:31:31.167 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:31.167 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:31.167 altname enp217s0f0np0 00:31:31.167 altname ens818f0np0 00:31:31.167 inet 192.168.100.8/24 scope global mlx_0_0 00:31:31.167 valid_lft forever preferred_lft forever 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:31.167 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:31:31.427 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:31.427 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:31.427 altname enp217s0f1np1 00:31:31.427 altname ens818f1np1 00:31:31.427 inet 192.168.100.9/24 scope global mlx_0_1 00:31:31.427 valid_lft forever preferred_lft forever 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- 
# continue 2 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:31.427 192.168.100.9' 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:31.427 192.168.100.9' 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:31.427 192.168.100.9' 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:31.427 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3704546 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3704546 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3704546 ']' 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:31.428 21:18:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.428 [2024-07-13 21:18:22.208922] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:31.428 [2024-07-13 21:18:22.208970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.428 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.428 [2024-07-13 21:18:22.279312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:31.687 [2024-07-13 21:18:22.319760] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.687 [2024-07-13 21:18:22.319802] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.687 [2024-07-13 21:18:22.319812] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.687 [2024-07-13 21:18:22.319821] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.687 [2024-07-13 21:18:22.319828] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:31.687 [2024-07-13 21:18:22.323034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.687 [2024-07-13 21:18:22.323055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:31.687 [2024-07-13 21:18:22.323137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:31.687 [2024-07-13 21:18:22.323138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.256 21:18:23 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:32.256 21:18:23 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:31:32.256 21:18:23 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:32.513 [2024-07-13 21:18:23.200167] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1937c80/0x193c170) succeed. 
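A minimal sketch of the target bring-up that host/fio.sh drives through rpc.py, condensed from the RPC invocations in the surrounding log (the full /var/jenkins/workspace paths are shortened to rpc.py; this is a recap of the logged steps, not an additional test stage):

  # bring up the NVMe-oF RDMA target state seen in this log (sketch; paths shortened)
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # -u: in-capsule data size
  rpc.py bdev_malloc_create 64 512 -b Malloc1                              # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
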
00:31:32.513 [2024-07-13 21:18:23.210703] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19392c0/0x197d800) succeed. 00:31:32.513 21:18:23 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:32.513 21:18:23 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:32.513 21:18:23 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.513 21:18:23 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:32.771 Malloc1 00:31:32.771 21:18:23 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:33.028 21:18:23 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:33.286 21:18:23 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:33.286 [2024-07-13 21:18:24.109047] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:33.286 21:18:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:33.545 
21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:33.545 21:18:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:33.804 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:33.804 fio-3.35 00:31:33.804 Starting 1 thread 00:31:34.063 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.595 00:31:36.595 test: (groupid=0, jobs=1): err= 0: pid=3705219: Sat Jul 13 21:18:26 2024 00:31:36.595 read: IOPS=17.4k, BW=68.2MiB/s (71.5MB/s)(137MiB/2004msec) 00:31:36.595 slat (nsec): min=1371, max=36288, avg=1513.28, stdev=488.63 00:31:36.595 clat (usec): min=2181, max=6609, avg=3640.36, stdev=89.76 00:31:36.595 lat (usec): min=2203, max=6611, avg=3641.88, stdev=89.68 00:31:36.595 clat percentiles (usec): 00:31:36.595 | 1.00th=[ 3589], 5.00th=[ 3621], 10.00th=[ 3621], 20.00th=[ 3621], 00:31:36.595 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3654], 00:31:36.595 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3654], 95.00th=[ 3654], 00:31:36.595 | 99.00th=[ 3687], 99.50th=[ 3884], 99.90th=[ 5145], 99.95th=[ 5669], 00:31:36.595 | 99.99th=[ 6587] 00:31:36.595 bw ( KiB/s): min=68248, max=70416, per=100.00%, avg=69794.00, stdev=1040.09, samples=4 00:31:36.595 iops : min=17062, max=17604, avg=17448.50, stdev=260.02, samples=4 00:31:36.595 write: IOPS=17.5k, BW=68.2MiB/s (71.5MB/s)(137MiB/2004msec); 0 zone resets 00:31:36.596 slat (nsec): min=1432, max=20114, avg=1636.63, stdev=509.89 00:31:36.596 clat (usec): min=2215, max=6621, avg=3638.27, stdev=84.30 00:31:36.596 lat (usec): min=2226, max=6623, avg=3639.91, stdev=84.22 00:31:36.596 clat percentiles (usec): 00:31:36.596 | 1.00th=[ 3589], 5.00th=[ 3621], 10.00th=[ 3621], 20.00th=[ 3621], 00:31:36.596 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3654], 00:31:36.596 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3654], 95.00th=[ 3654], 00:31:36.596 | 99.00th=[ 3687], 99.50th=[ 3752], 99.90th=[ 4359], 99.95th=[ 5669], 00:31:36.596 | 99.99th=[ 6587] 00:31:36.596 bw ( KiB/s): min=68440, max=70560, per=100.00%, avg=69884.00, stdev=979.15, samples=4 00:31:36.596 iops : min=17110, max=17640, avg=17471.00, stdev=244.79, samples=4 00:31:36.596 lat (msec) : 4=99.74%, 10=0.26% 00:31:36.596 cpu : usr=99.50%, sys=0.05%, ctx=22, majf=0, minf=3 00:31:36.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:36.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:36.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.596 issued rwts: total=34965,35001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:36.596 00:31:36.596 Run status group 0 (all jobs): 00:31:36.596 READ: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=137MiB (143MB), run=2004-2004msec 00:31:36.596 WRITE: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=137MiB (143MB), run=2004-2004msec 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:36.596 21:18:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 
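Each fio run in this test follows the same invocation pattern: the SPDK fio plugin is injected via LD_PRELOAD and the remote namespace is selected entirely through fio's --filename transport string. A condensed sketch of the call visible above, with $SPDK standing in for the full workspace path:

  # SPDK fio external-ioengine invocation pattern (sketch; $SPDK is a placeholder)
  LD_PRELOAD=$SPDK/build/fio/spdk_nvme \
    /usr/src/fio/fio $SPDK/app/fio/nvme/mock_sgl_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
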
00:31:36.596 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:36.596 fio-3.35 00:31:36.596 Starting 1 thread 00:31:36.596 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.139 00:31:39.139 test: (groupid=0, jobs=1): err= 0: pid=3705826: Sat Jul 13 21:18:29 2024 00:31:39.139 read: IOPS=14.2k, BW=222MiB/s (233MB/s)(437MiB/1963msec) 00:31:39.139 slat (nsec): min=2250, max=47532, avg=2603.10, stdev=1018.86 00:31:39.139 clat (usec): min=475, max=8906, avg=1654.92, stdev=1331.57 00:31:39.139 lat (usec): min=477, max=8939, avg=1657.52, stdev=1331.94 00:31:39.139 clat percentiles (usec): 00:31:39.139 | 1.00th=[ 693], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 930], 00:31:39.139 | 30.00th=[ 996], 40.00th=[ 1074], 50.00th=[ 1172], 60.00th=[ 1287], 00:31:39.139 | 70.00th=[ 1434], 80.00th=[ 1614], 90.00th=[ 4555], 95.00th=[ 4948], 00:31:39.139 | 99.00th=[ 6390], 99.50th=[ 6915], 99.90th=[ 7570], 99.95th=[ 7963], 00:31:39.139 | 99.99th=[ 8848] 00:31:39.139 bw ( KiB/s): min=109024, max=113056, per=48.59%, avg=110680.00, stdev=1706.21, samples=4 00:31:39.139 iops : min= 6814, max= 7066, avg=6917.50, stdev=106.64, samples=4 00:31:39.139 write: IOPS=8064, BW=126MiB/s (132MB/s)(225MiB/1787msec); 0 zone resets 00:31:39.139 slat (usec): min=26, max=128, avg=29.10, stdev= 5.99 00:31:39.139 clat (usec): min=4377, max=20020, avg=12867.64, stdev=1841.06 00:31:39.139 lat (usec): min=4406, max=20047, avg=12896.74, stdev=1840.62 00:31:39.139 clat percentiles (usec): 00:31:39.139 | 1.00th=[ 7635], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:31:39.139 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:31:39.139 | 70.00th=[13829], 80.00th=[14222], 90.00th=[15008], 95.00th=[15795], 00:31:39.139 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19268], 99.95th=[19530], 00:31:39.139 | 99.99th=[20055] 00:31:39.139 bw ( KiB/s): min=111104, max=116992, per=88.74%, avg=114512.00, stdev=2473.95, samples=4 00:31:39.139 iops : min= 6944, max= 7312, avg=7157.00, stdev=154.62, samples=4 00:31:39.139 lat (usec) : 500=0.01%, 750=1.74%, 1000=18.24% 00:31:39.139 lat (msec) : 2=36.85%, 4=2.10%, 10=8.55%, 20=32.51%, 50=0.01% 00:31:39.139 cpu : usr=95.96%, sys=2.39%, ctx=183, majf=0, minf=2 00:31:39.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:39.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:39.139 issued rwts: total=27944,14412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.139 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:39.139 00:31:39.139 Run status group 0 (all jobs): 00:31:39.139 READ: bw=222MiB/s (233MB/s), 222MiB/s-222MiB/s (233MB/s-233MB/s), io=437MiB (458MB), run=1963-1963msec 00:31:39.139 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=225MiB (236MB), run=1787-1787msec 00:31:39.139 21:18:29 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:39.139 21:18:29 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:39.139 21:18:29 nvmf_rdma.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:39.139 21:18:29 nvmf_rdma.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:39.139 21:18:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:39.139 21:18:29 
nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:31:39.139 21:18:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:39.139 21:18:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:39.139 21:18:29 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:31:39.139 21:18:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:31:39.139 21:18:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:31:39.139 21:18:30 nvmf_rdma.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:31:42.449 Nvme0n1 00:31:42.449 21:18:33 nvmf_rdma.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:47.734 21:18:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=02ca66be-5a11-4984-8eb3-2ccd92837019 00:31:47.734 21:18:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 02ca66be-5a11-4984-8eb3-2ccd92837019 00:31:47.734 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=02ca66be-5a11-4984-8eb3-2ccd92837019 00:31:47.734 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:31:47.734 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:31:47.734 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:31:47.734 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:47.994 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:31:47.994 { 00:31:47.994 "uuid": "02ca66be-5a11-4984-8eb3-2ccd92837019", 00:31:47.994 "name": "lvs_0", 00:31:47.994 "base_bdev": "Nvme0n1", 00:31:47.994 "total_data_clusters": 1862, 00:31:47.994 "free_clusters": 1862, 00:31:47.994 "block_size": 512, 00:31:47.994 "cluster_size": 1073741824 00:31:47.994 } 00:31:47.994 ]' 00:31:47.994 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="02ca66be-5a11-4984-8eb3-2ccd92837019") .free_clusters' 00:31:47.994 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1862 00:31:47.994 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="02ca66be-5a11-4984-8eb3-2ccd92837019") .cluster_size' 00:31:47.994 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:31:47.994 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=1906688 00:31:47.994 21:18:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 1906688 00:31:47.994 1906688 00:31:47.994 21:18:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:31:48.625 fcaf4b05-1edb-44fa-9cb4-38a241a1c408 00:31:48.625 21:18:39 nvmf_rdma.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:48.625 
21:18:39 nvmf_rdma.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:48.884 21:18:39 nvmf_rdma.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:49.142 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:49.143 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:49.143 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:49.143 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:49.143 21:18:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:49.401 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 
00:31:49.401 fio-3.35 00:31:49.401 Starting 1 thread 00:31:49.401 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.937 00:31:51.937 test: (groupid=0, jobs=1): err= 0: pid=3708024: Sat Jul 13 21:18:42 2024 00:31:51.937 read: IOPS=9509, BW=37.1MiB/s (39.0MB/s)(74.5MiB/2005msec) 00:31:51.937 slat (nsec): min=1480, max=27368, avg=1563.38, stdev=240.56 00:31:51.937 clat (usec): min=204, max=364235, avg=6689.11, stdev=20877.83 00:31:51.937 lat (usec): min=205, max=364238, avg=6690.67, stdev=20877.86 00:31:51.937 clat percentiles (msec): 00:31:51.937 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:31:51.937 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:31:51.937 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:31:51.937 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 363], 99.95th=[ 363], 00:31:51.937 | 99.99th=[ 363] 00:31:51.937 bw ( KiB/s): min=11816, max=46856, per=99.94%, avg=38016.00, stdev=17466.98, samples=4 00:31:51.937 iops : min= 2954, max=11714, avg=9504.00, stdev=4366.74, samples=4 00:31:51.937 write: IOPS=9518, BW=37.2MiB/s (39.0MB/s)(74.6MiB/2005msec); 0 zone resets 00:31:51.937 slat (nsec): min=1530, max=6508, avg=1788.20, stdev=225.62 00:31:51.937 clat (usec): min=175, max=364569, avg=6648.21, stdev=20293.43 00:31:51.937 lat (usec): min=176, max=364572, avg=6650.00, stdev=20293.47 00:31:51.937 clat percentiles (msec): 00:31:51.937 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:31:51.937 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:31:51.937 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:31:51.937 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 363], 99.95th=[ 363], 00:31:51.937 | 99.99th=[ 363] 00:31:51.937 bw ( KiB/s): min=12360, max=46648, per=99.89%, avg=38034.00, stdev=17116.05, samples=4 00:31:51.937 iops : min= 3090, max=11662, avg=9508.50, stdev=4279.01, samples=4 00:31:51.937 lat (usec) : 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.02% 00:31:51.937 lat (msec) : 2=0.04%, 4=0.32%, 10=99.24%, 20=0.01%, 500=0.34% 00:31:51.937 cpu : usr=99.55%, sys=0.10%, ctx=16, majf=0, minf=12 00:31:51.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:51.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.937 issued rwts: total=19067,19085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.937 00:31:51.937 Run status group 0 (all jobs): 00:31:51.937 READ: bw=37.1MiB/s (39.0MB/s), 37.1MiB/s-37.1MiB/s (39.0MB/s-39.0MB/s), io=74.5MiB (78.1MB), run=2005-2005msec 00:31:51.937 WRITE: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.6MiB (78.2MB), run=2005-2005msec 00:31:51.937 21:18:42 nvmf_rdma.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:51.937 21:18:42 nvmf_rdma.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:53.314 21:18:43 nvmf_rdma.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=5587f4a7-52d7-4b03-83ce-56435098da89 00:31:53.314 21:18:43 nvmf_rdma.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 5587f4a7-52d7-4b03-83ce-56435098da89 00:31:53.314 21:18:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local 
lvs_uuid=5587f4a7-52d7-4b03-83ce-56435098da89 00:31:53.314 21:18:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:31:53.314 21:18:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:31:53.314 21:18:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:31:53.314 21:18:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:53.314 21:18:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:31:53.314 { 00:31:53.314 "uuid": "02ca66be-5a11-4984-8eb3-2ccd92837019", 00:31:53.314 "name": "lvs_0", 00:31:53.314 "base_bdev": "Nvme0n1", 00:31:53.314 "total_data_clusters": 1862, 00:31:53.314 "free_clusters": 0, 00:31:53.314 "block_size": 512, 00:31:53.314 "cluster_size": 1073741824 00:31:53.314 }, 00:31:53.314 { 00:31:53.314 "uuid": "5587f4a7-52d7-4b03-83ce-56435098da89", 00:31:53.314 "name": "lvs_n_0", 00:31:53.314 "base_bdev": "fcaf4b05-1edb-44fa-9cb4-38a241a1c408", 00:31:53.314 "total_data_clusters": 476206, 00:31:53.314 "free_clusters": 476206, 00:31:53.314 "block_size": 512, 00:31:53.314 "cluster_size": 4194304 00:31:53.314 } 00:31:53.314 ]' 00:31:53.314 21:18:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="5587f4a7-52d7-4b03-83ce-56435098da89") .free_clusters' 00:31:53.314 21:18:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=476206 00:31:53.314 21:18:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="5587f4a7-52d7-4b03-83ce-56435098da89") .cluster_size' 00:31:53.573 21:18:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:31:53.573 21:18:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=1904824 00:31:53.573 21:18:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 1904824 00:31:53.573 1904824 00:31:53.573 21:18:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:31:54.509 8bb75f7b-802a-4d95-acd7-166ffef2f443 00:31:54.509 21:18:45 nvmf_rdma.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:54.509 21:18:45 nvmf_rdma.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:54.767 21:18:45 nvmf_rdma.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:55.064 21:18:45 
nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:55.064 21:18:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:55.329 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:55.329 fio-3.35 00:31:55.329 Starting 1 thread 00:31:55.329 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.866 00:31:57.866 test: (groupid=0, jobs=1): err= 0: pid=3709105: Sat Jul 13 21:18:48 2024 00:31:57.866 read: IOPS=9801, BW=38.3MiB/s (40.1MB/s)(76.8MiB/2005msec) 00:31:57.866 slat (nsec): min=1364, max=26786, avg=1495.41, stdev=318.78 00:31:57.866 clat (usec): min=2863, max=10908, avg=6439.29, stdev=185.64 00:31:57.866 lat (usec): min=2867, max=10910, avg=6440.78, stdev=185.58 00:31:57.866 clat percentiles (usec): 00:31:57.866 | 1.00th=[ 6325], 5.00th=[ 6390], 10.00th=[ 6390], 20.00th=[ 6390], 00:31:57.866 | 30.00th=[ 6390], 40.00th=[ 6456], 50.00th=[ 6456], 60.00th=[ 6456], 00:31:57.866 | 70.00th=[ 6456], 80.00th=[ 6456], 90.00th=[ 6456], 95.00th=[ 6521], 00:31:57.866 | 99.00th=[ 6521], 99.50th=[ 6652], 99.90th=[ 9241], 99.95th=[10814], 00:31:57.866 | 99.99th=[10945] 00:31:57.866 bw ( KiB/s): min=37480, max=40080, per=99.94%, avg=39184.00, stdev=1200.23, samples=4 00:31:57.866 iops : min= 9370, max=10020, avg=9796.00, stdev=300.06, samples=4 00:31:57.866 
write: IOPS=9816, BW=38.3MiB/s (40.2MB/s)(76.9MiB/2005msec); 0 zone resets 00:31:57.866 slat (nsec): min=1417, max=17699, avg=1628.09, stdev=318.39 00:31:57.866 clat (usec): min=2858, max=10896, avg=6461.07, stdev=183.68 00:31:57.866 lat (usec): min=2865, max=10897, avg=6462.70, stdev=183.62 00:31:57.866 clat percentiles (usec): 00:31:57.866 | 1.00th=[ 6390], 5.00th=[ 6390], 10.00th=[ 6390], 20.00th=[ 6456], 00:31:57.866 | 30.00th=[ 6456], 40.00th=[ 6456], 50.00th=[ 6456], 60.00th=[ 6456], 00:31:57.866 | 70.00th=[ 6456], 80.00th=[ 6456], 90.00th=[ 6521], 95.00th=[ 6521], 00:31:57.866 | 99.00th=[ 6587], 99.50th=[ 6652], 99.90th=[ 9241], 99.95th=[10421], 00:31:57.866 | 99.99th=[10814] 00:31:57.866 bw ( KiB/s): min=38088, max=39968, per=99.91%, avg=39230.00, stdev=811.71, samples=4 00:31:57.866 iops : min= 9522, max= 9992, avg=9807.50, stdev=202.93, samples=4 00:31:57.866 lat (msec) : 4=0.02%, 10=99.91%, 20=0.08% 00:31:57.866 cpu : usr=99.50%, sys=0.20%, ctx=15, majf=0, minf=12 00:31:57.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:57.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:57.866 issued rwts: total=19652,19682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:57.866 00:31:57.866 Run status group 0 (all jobs): 00:31:57.866 READ: bw=38.3MiB/s (40.1MB/s), 38.3MiB/s-38.3MiB/s (40.1MB/s-40.1MB/s), io=76.8MiB (80.5MB), run=2005-2005msec 00:31:57.866 WRITE: bw=38.3MiB/s (40.2MB/s), 38.3MiB/s-38.3MiB/s (40.2MB/s-40.2MB/s), io=76.9MiB (80.6MB), run=2005-2005msec 00:31:57.866 21:18:48 nvmf_rdma.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:57.866 21:18:48 nvmf_rdma.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:57.866 21:18:48 nvmf_rdma.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:05.990 21:18:55 nvmf_rdma.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:05.990 21:18:55 nvmf_rdma.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:11.257 21:19:01 nvmf_rdma.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:11.257 21:19:01 nvmf_rdma.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:14.542 rmmod nvme_rdma 00:32:14.542 rmmod nvme_fabrics 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3704546 ']' 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3704546 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3704546 ']' 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3704546 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:14.542 21:19:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3704546 00:32:14.542 21:19:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:14.542 21:19:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:14.542 21:19:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3704546' 00:32:14.542 killing process with pid 3704546 00:32:14.542 21:19:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3704546 00:32:14.542 21:19:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3704546 00:32:14.542 21:19:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:14.542 21:19:05 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:14.542 00:32:14.542 real 0m49.979s 00:32:14.542 user 3m37.863s 00:32:14.542 sys 0m7.812s 00:32:14.542 21:19:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:14.542 21:19:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.542 ************************************ 00:32:14.542 END TEST nvmf_fio_host 00:32:14.542 ************************************ 00:32:14.542 21:19:05 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:32:14.542 21:19:05 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:14.542 21:19:05 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:14.542 21:19:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:14.542 ************************************ 00:32:14.542 START TEST nvmf_failover 00:32:14.542 ************************************ 00:32:14.542 21:19:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:32:14.801 * Looking for test storage... 
00:32:14.801 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:32:14.801 21:19:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:21.398 
21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:21.398 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:21.398 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:21.398 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:21.398 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:32:21.398 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:32:21.399 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:32:21.399 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:32:21.399 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:32:21.399 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:32:21.399 21:19:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:32:21.399 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:21.399 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:21.399 altname enp217s0f0np0 00:32:21.399 altname ens818f0np0 00:32:21.399 inet 192.168.100.8/24 scope global mlx_0_0 00:32:21.399 valid_lft forever preferred_lft forever 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:32:21.399 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:21.399 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:21.399 altname enp217s0f1np1 00:32:21.399 altname ens818f1np1 00:32:21.399 inet 192.168.100.9/24 scope global mlx_0_1 00:32:21.399 valid_lft forever preferred_lft forever 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:21.399 21:19:12 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:32:21.399 192.168.100.9' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:32:21.399 192.168.100.9' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:32:21.399 192.168.100.9' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3715456 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3715456 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3715456 ']' 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:21.399 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.399 [2024-07-13 21:19:12.261168] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:21.399 [2024-07-13 21:19:12.261220] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.658 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.658 [2024-07-13 21:19:12.331400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:21.658 [2024-07-13 21:19:12.370257] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:21.658 [2024-07-13 21:19:12.370300] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:21.658 [2024-07-13 21:19:12.370310] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:21.658 [2024-07-13 21:19:12.370318] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:21.658 [2024-07-13 21:19:12.370325] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
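For reference, the get_ip_address steps traced above (nvmf/common.sh@112-113) reduce to one short pipeline. This is a minimal sketch, not the full common.sh helper; the mlx_0_0/mlx_0_1 interface names and the resulting addresses are the ones enumerated in this run:

    # Take the first IPv4 address on an RDMA-capable netdev and strip the /prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run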
00:32:21.658 [2024-07-13 21:19:12.370428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:21.658 [2024-07-13 21:19:12.370516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:21.658 [2024-07-13 21:19:12.370518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.658 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:21.658 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:32:21.658 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:21.658 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:21.658 21:19:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.658 21:19:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.658 21:19:12 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:21.918 [2024-07-13 21:19:12.678131] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e84420/0x1e88910) succeed. 00:32:21.918 [2024-07-13 21:19:12.688343] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e859c0/0x1ec9fa0) succeed. 00:32:22.176 21:19:12 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:22.176 Malloc0 00:32:22.176 21:19:13 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:22.445 21:19:13 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:22.711 21:19:13 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:22.711 [2024-07-13 21:19:13.537014] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:22.711 21:19:13 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:32:22.969 [2024-07-13 21:19:13.709368] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:32:22.969 21:19:13 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:32:23.228 [2024-07-13 21:19:13.878015] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:32:23.228 21:19:13 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:23.228 21:19:13 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3715748 00:32:23.228 21:19:13 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; 
rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:23.228 21:19:13 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3715748 /var/tmp/bdevperf.sock 00:32:23.228 21:19:13 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3715748 ']' 00:32:23.228 21:19:13 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:23.228 21:19:13 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:23.228 21:19:13 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:23.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:23.228 21:19:13 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:23.228 21:19:13 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:23.487 21:19:14 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:23.487 21:19:14 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:32:23.487 21:19:14 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:23.487 NVMe0n1 00:32:23.745 21:19:14 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:23.745 00:32:24.004 21:19:14 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:24.004 21:19:14 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3715971 00:32:24.004 21:19:14 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:24.942 21:19:15 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:25.200 21:19:15 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:28.491 21:19:18 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:28.491 00:32:28.491 21:19:19 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:32:28.491 21:19:19 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:31.780 21:19:22 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:31.780 [2024-07-13 21:19:22.439749] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:31.780 21:19:22 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:32.716 21:19:23 nvmf_rdma.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:32:32.975 21:19:23 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 3715971 00:32:39.547 0 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 3715748 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3715748 ']' 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3715748 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3715748 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3715748' 00:32:39.547 killing process with pid 3715748 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3715748 00:32:39.547 21:19:29 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3715748 00:32:39.547 21:19:30 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:39.547 [2024-07-13 21:19:13.938679] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:39.547 [2024-07-13 21:19:13.938748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715748 ] 00:32:39.547 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.547 [2024-07-13 21:19:14.009648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.547 [2024-07-13 21:19:14.048588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.547 Running I/O for 15 seconds... 
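Condensed from the trace above, the listener flapping that host/failover.sh drives while bdevperf runs its 15-second workload looks like the sketch below. It is a minimal reconstruction from the traced RPC calls (steps @35 through @57), with the rpc.py path, socket, address, and NQN taken from this run; each remove_listener tears down the RDMA queue pair, so commands still in flight on that path complete with ABORTED - SQ DELETION (as in the completions that follow) and are retried on the surviving path:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Two active paths to the same subsystem while bdevperf issues I/O:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $nqn
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4420   # abort I/O on 4420, fail over to 4421
    sleep 3
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4421
    sleep 3
    $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420      # restore the original path
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4422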
00:32:39.547 [2024-07-13 21:19:16.818100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.547 [2024-07-13 21:19:16.818149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.547 [2024-07-13 21:19:16.818169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.547 [2024-07-13 21:19:16.818179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.547 [2024-07-13 21:19:16.818191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 
p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.818982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.818991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.819002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:27480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.819016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.819027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.548 [2024-07-13 21:19:16.819036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.548 [2024-07-13 21:19:16.819046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 
21:19:16.819351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.549 [2024-07-13 21:19:16.819411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 
00:32:39.549 [2024-07-13 21:19:16.819545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.549 [2024-07-13 21:19:16.819847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x182f00 00:32:39.549 [2024-07-13 21:19:16.819856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.819867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.819876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.819887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.819896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.819907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:26816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.819916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.819927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.819936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.819946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.819956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.819967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.819976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.819987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.819996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 
key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.550 [2024-07-13 21:19:16.820637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182f00 00:32:39.550 [2024-07-13 21:19:16.820645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 
00:32:39.550 [2024-07-13 21:19:16.820656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:27112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182f00
00:32:39.551 [2024-07-13 21:19:16.820666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0
00:32:39.551 [2024-07-13 21:19:16.820676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182f00
00:32:39.551 [2024-07-13 21:19:16.820685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0
00:32:39.551 [2024-07-13 21:19:16.820695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x182f00
00:32:39.551 [2024-07-13 21:19:16.820704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0
00:32:39.551 [2024-07-13 21:19:16.822594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:39.551 [2024-07-13 21:19:16.822609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:39.551 [2024-07-13 21:19:16.822618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27136 len:8 PRP1 0x0 PRP2 0x0
00:32:39.551 [2024-07-13 21:19:16.822629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:39.551 [2024-07-13 21:19:16.822671] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:32:39.551 [2024-07-13 21:19:16.822682] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:32:39.551 [2024-07-13 21:19:16.822694] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:39.551 [2024-07-13 21:19:16.825426] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:39.551 [2024-07-13 21:19:16.839861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:32:39.551 [2024-07-13 21:19:16.884985] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:32:39.551 [2024-07-13 21:19:20.257129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:84 nsid:1 lba:113664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182f00 00:32:39.551 [2024-07-13 21:19:20.257594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-07-13 21:19:20.257616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-07-13 21:19:20.257636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-07-13 21:19:20.257655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-07-13 21:19:20.257675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-07-13 21:19:20.257695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.551 [2024-07-13 21:19:20.257714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.551 [2024-07-13 21:19:20.257725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.257735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.257754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 
21:19:20.257765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x182f00 00:32:39.552 [2024-07-13 21:19:20.257774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x182f00 00:32:39.552 [2024-07-13 21:19:20.257794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182f00 00:32:39.552 [2024-07-13 21:19:20.257813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x182f00 00:32:39.552 [2024-07-13 21:19:20.257834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x182f00 00:32:39.552 [2024-07-13 21:19:20.257855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.257875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.257894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.257915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.257934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 
21:19:20.257953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.257973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.257984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.257993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.258003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.258017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.258028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.258037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.258047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.258056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.258068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.258077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.258088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.258098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.258109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.258118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.258129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.552 [2024-07-13 21:19:20.258138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.552 [2024-07-13 21:19:20.258149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:39.552 [2024-07-13 21:19:20.258158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0
00:32:39.552 [2024-07-13 21:19:20.258168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:39.552 [2024-07-13 21:19:20.258177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0
00:32:39.552 [2024-07-13 21:19:20.258349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182f00
00:32:39.552 [2024-07-13 21:19:20.258358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0
[... identical command/completion NOTICE pairs repeated for the remaining queued I/Os on qid:1 — WRITEs lba 114296-114528 and READs lba 113728-114096 — every completion ABORTED - SQ DELETION (00/08) ...]
00:32:39.554 [2024-07-13 21:19:20.261744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:39.554 [2024-07-13 21:19:20.261760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:39.554 [2024-07-13 21:19:20.261769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114536 len:8 PRP1 0x0 PRP2 0x0
00:32:39.554 [2024-07-13 21:19:20.261779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:39.554 [2024-07-13 21:19:20.261820] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:32:39.554 [2024-07-13 21:19:20.261832] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:32:39.554 [2024-07-13 21:19:20.261844] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:39.554 [2024-07-13 21:19:20.264589] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:39.554 [2024-07-13 21:19:20.279054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:32:39.554 [2024-07-13 21:19:20.323619] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
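The block above is one complete failover episode: every I/O still queued on the deleted submission queue is printed and completed with ABORTED - SQ DELETION (00/08), the disconnected qpair is freed, bdev_nvme fails over from 192.168.100.8:4421 to 192.168.100.8:4422, and the controller reset completes successfully. A minimal sketch (not part of the test output) for summarizing such episodes from a saved console log; the regexes assume only the NOTICE/ERROR formats shown above, and the default file name console.log is a placeholder:

#!/usr/bin/env python3
# Sketch: count aborted completions per bdev_nvme failover/reset episode
# in an SPDK console log shaped like the excerpt above.
import re
import sys

ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(r"Resetting controller successful")

def summarize(path):
    aborted = 0  # aborted completions seen since the last successful reset
    for line in open(path, errors="replace"):
        if ABORT_RE.search(line):
            aborted += 1
        m = FAILOVER_RE.search(line)
        if m:
            print(f"failover: {m.group(1)} -> {m.group(2)} ({aborted} aborted completions so far)")
        if RESET_OK_RE.search(line):
            print(f"reset successful after {aborted} aborted completions")
            aborted = 0  # next episode starts from zero

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Run it against a captured log (e.g. python3 summarize_failover.py console.log, both names hypothetical) to get one line per failover target change and one per completed reset, instead of scanning the raw per-command dump by hand.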
00:32:39.554 [2024-07-13 21:19:24.634741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:39.554 [2024-07-13 21:19:24.634785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0
00:32:39.555 [2024-07-13 21:19:24.635139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x182f00
00:32:39.555 [2024-07-13 21:19:24.635150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0
[... identical command/completion NOTICE pairs repeated for the remaining queued I/Os on qid:1 — WRITEs lba 81984-82496 and READs lba 81664-81888 — every completion ABORTED - SQ DELETION (00/08) ...]
00:32:39.557 [2024-07-13 21:19:24.636736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:39.557 [2024-07-13 21:19:24.636745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000
sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.636766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.636788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.636808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.636828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.636848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182f00 00:32:39.557 [2024-07-13 21:19:24.636867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182f00 00:32:39.557 [2024-07-13 21:19:24.636888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x182f00 00:32:39.557 [2024-07-13 21:19:24.636910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.636929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.636949] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.636969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.636988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.636999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 
21:19:24.637154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:39.557 [2024-07-13 21:19:24.637233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182f00 00:32:39.557 [2024-07-13 21:19:24.637253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182f00 00:32:39.557 [2024-07-13 21:19:24.637272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x182f00 00:32:39.557 [2024-07-13 21:19:24.637293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x182f00 00:32:39.557 [2024-07-13 21:19:24.637313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 21:19:24.637324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182f00 00:32:39.557 [2024-07-13 21:19:24.637333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0 00:32:39.557 [2024-07-13 
21:19:24.637343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x182f00
00:32:39.557 [2024-07-13 21:19:24.637353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7d846000 sqhd:52d0 p:0 m:0 dnr:0
00:32:39.557 [2024-07-13 21:19:24.639289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:39.557 [2024-07-13 21:19:24.639303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:39.557 [2024-07-13 21:19:24.639312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81968 len:8 PRP1 0x0 PRP2 0x0
00:32:39.557 [2024-07-13 21:19:24.639324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:39.557 [2024-07-13 21:19:24.639362] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:32:39.557 [2024-07-13 21:19:24.639373] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:32:39.557 [2024-07-13 21:19:24.639385] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:39.557 [2024-07-13 21:19:24.642096] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:39.557 [2024-07-13 21:19:24.656123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:32:39.557 [2024-07-13 21:19:24.697408] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
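The sequence above is the interesting tail of the flood of ABORTED completions that precedes it: bdev_nvme aborts the I/O still queued on the dying RDMA qpair, completes each request manually with ABORTED - SQ DELETION status, frees the qpair, fails the trid over from 192.168.100.8:4422 to 192.168.100.8:4420, and resets the controller. A minimal sketch of how a script can confirm the controller actually came back after such a reset, built from the same rpc.py calls this run uses elsewhere (the retry loop itself is illustrative, not part of failover.sh):

  # Poll the bdevperf RPC socket until controller NVMe0 reappears after a reset.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  for _ in {1..10}; do
      # bdev_nvme_get_controllers lists the controllers currently attached to bdevperf
      if "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0; then
          echo "controller NVMe0 recovered"
          break
      fi
      sleep 1
  done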
00:32:39.557
00:32:39.557 Latency(us)
00:32:39.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:39.557 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:39.557 Verification LBA range: start 0x0 length 0x4000
00:32:39.557 NVMe0n1 : 15.01 14289.95 55.82 307.42 0.00 8747.98 316.21 1020054.73
00:32:39.557 ===================================================================================================================
00:32:39.557 Total : 14289.95 55.82 307.42 0.00 8747.98 316.21 1020054.73
00:32:39.557 Received shutdown signal, test time was about 15.000000 seconds
00:32:39.557
00:32:39.557 Latency(us)
00:32:39.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:39.558 ===================================================================================================================
00:32:39.558 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3718410
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3718410 /var/tmp/bdevperf.sock
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3718410 ']'
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:39.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
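The check traced above is how failover.sh decides the first half of the test passed: it greps the captured bdevperf output for 'Resetting controller successful' and requires exactly three hits, one per forced path failure. It then relaunches bdevperf with -z, which starts the app idle and waits for configuration over the RPC socket given by -r, as the trace that follows shows. A condensed sketch of that pattern using the commands visible in this run ($out, the variable holding the captured log, is an assumed name):

  # Expect one successful reset per forced failover.
  count=$(grep -c 'Resetting controller successful' "$out")
  (( count == 3 )) || exit 1

  # Relaunch bdevperf idle (-z) so it can be wired up over RPC (-r <socket>);
  # -q/-o/-w/-t mirror the queue depth, IO size, workload and runtime above.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

The multipath wiring then happens entirely over that socket: two extra listeners are added on ports 4421 and 4422, bdev_nvme_attach_controller is called once per trid with the same -b NVMe0 so all three paths back one bdev, and detaching the active 4420 path is what forces the next failover.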
00:32:39.558 21:19:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:32:40.126 21:19:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:40.126 21:19:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:32:40.127 21:19:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:32:40.127 21:19:30 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:32:40.385 [2024-07-13 21:19:31.051039] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:32:40.385 21:19:31 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:32:40.385 [2024-07-13 21:19:31.231692] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:32:40.385 21:19:31 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:40.643 NVMe0n1
00:32:40.644 21:19:31 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:40.902 00
00:32:40.902 21:19:31 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:41.161 00
00:32:41.161 21:19:31 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:41.161 21:19:31 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:32:41.420 21:19:32 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:41.715 21:19:32 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:32:45.031 21:19:35 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:45.031 21:19:35 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:32:45.031 21:19:35 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3719368
00:32:45.031 21:19:35 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:45.031 21:19:35 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 3719368
00:32:45.967 0
00:32:45.967 21:19:36 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:45.967 [2024-07-13 21:19:30.086944] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:32:45.967 [2024-07-13 21:19:30.086998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718410 ] 00:32:45.967 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.967 [2024-07-13 21:19:30.158895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.967 [2024-07-13 21:19:30.197591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.967 [2024-07-13 21:19:32.330005] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:32:45.967 [2024-07-13 21:19:32.330679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.967 [2024-07-13 21:19:32.330710] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.967 [2024-07-13 21:19:32.346072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:45.967 [2024-07-13 21:19:32.362238] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:45.967 Running I/O for 1 seconds... 00:32:45.967 00:32:45.967 Latency(us) 00:32:45.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.967 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:45.967 Verification LBA range: start 0x0 length 0x4000 00:32:45.967 NVMe0n1 : 1.01 17953.58 70.13 0.00 0.00 7091.28 2818.05 15204.35 00:32:45.967 =================================================================================================================== 00:32:45.967 Total : 17953.58 70.13 0.00 0.00 7091.28 2818.05 15204.35 00:32:45.967 21:19:36 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:45.967 21:19:36 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:46.225 21:19:36 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:46.225 21:19:37 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:46.225 21:19:37 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:46.484 21:19:37 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:46.742 21:19:37 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 3718410 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3718410 ']' 00:32:50.027 21:19:40 
nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3718410 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3718410 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3718410' 00:32:50.027 killing process with pid 3718410 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3718410 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3718410 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:50.027 21:19:40 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:50.286 rmmod nvme_rdma 00:32:50.286 rmmod nvme_fabrics 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3715456 ']' 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3715456 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3715456 ']' 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3715456 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3715456 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
3715456' 00:32:50.286 killing process with pid 3715456 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3715456 00:32:50.286 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3715456 00:32:50.545 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:50.545 21:19:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:50.545 00:32:50.545 real 0m36.009s 00:32:50.545 user 1m59.261s 00:32:50.545 sys 0m7.368s 00:32:50.545 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:50.545 21:19:41 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:50.545 ************************************ 00:32:50.545 END TEST nvmf_failover 00:32:50.545 ************************************ 00:32:50.804 21:19:41 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:32:50.804 21:19:41 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:50.804 21:19:41 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:50.804 21:19:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:50.804 ************************************ 00:32:50.804 START TEST nvmf_host_discovery 00:32:50.804 ************************************ 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:32:50.804 * Looking for test storage... 00:32:50.804 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.804 21:19:41 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.804 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:50.805 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:50.805 21:19:41 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:50.805 21:19:41 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:32:50.805 21:19:41 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:32:50.805 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:32:50.805 21:19:41 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:32:50.805 00:32:50.805 real 0m0.145s 00:32:50.805 user 0m0.068s 00:32:50.805 sys 0m0.087s 00:32:50.805 21:19:41 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:50.805 21:19:41 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:50.805 ************************************ 00:32:50.805 END TEST nvmf_host_discovery 00:32:50.805 ************************************ 00:32:50.805 21:19:41 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:32:50.805 21:19:41 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:50.805 21:19:41 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:50.805 21:19:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:51.063 ************************************ 00:32:51.063 START TEST nvmf_host_multipath_status 00:32:51.063 ************************************ 00:32:51.063 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:32:51.063 * Looking for test storage... 
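The discovery suite above ends almost immediately (real 0m0.145s): host/discovery.sh bails out before doing any work because the RDMA transport cannot give host and target the same IP. The guard is just a transport check at the top of the script; a sketch of the pattern (the TEST_TRANSPORT variable name is an assumption — the trace only shows the already-expanded comparison '[' rdma == rdma ']'):

  # Skip the whole suite when running over RDMA.
  if [ "$TEST_TRANSPORT" = rdma ]; then
      echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
      exit 0
  fi

Because the script exits 0, run_test treats the early exit as a pass, which is why the END TEST banner above still prints even though nothing was exercised.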
00:32:51.063 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:51.063 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.063 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:51.063 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.063 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:51.064 21:19:41 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:51.064 21:19:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:57.638 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:57.638 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:57.638 
21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:57.638 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:57.638 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:32:57.638 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:32:57.639 21:19:48 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:32:57.639 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:57.639 link/ether ec:0d:9a:8b:2d:dc brd 
ff:ff:ff:ff:ff:ff 00:32:57.639 altname enp217s0f0np0 00:32:57.639 altname ens818f0np0 00:32:57.639 inet 192.168.100.8/24 scope global mlx_0_0 00:32:57.639 valid_lft forever preferred_lft forever 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:32:57.639 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:57.639 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:57.639 altname enp217s0f1np1 00:32:57.639 altname ens818f1np1 00:32:57.639 inet 192.168.100.9/24 scope global mlx_0_1 00:32:57.639 valid_lft forever preferred_lft forever 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.639 21:19:48 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:32:57.639 192.168.100.9' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:32:57.639 192.168.100.9' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:32:57.639 192.168.100.9' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3723537 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3723537 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3723537 ']' 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:57.639 21:19:48 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:57.639 [2024-07-13 21:19:48.422067] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:57.639 [2024-07-13 21:19:48.422121] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.639 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.639 [2024-07-13 21:19:48.492652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:57.899 [2024-07-13 21:19:48.532006] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.899 [2024-07-13 21:19:48.532049] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.899 [2024-07-13 21:19:48.532059] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.899 [2024-07-13 21:19:48.532068] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.899 [2024-07-13 21:19:48.532075] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
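The ip/awk/cut pipeline traced above is how the harness resolves each RDMA interface to a bare IPv4 address before wiring up the target. A minimal sketch of that lookup, assuming only the interface names the log found (mlx_0_0, mlx_0_1):

    # Resolve an interface to its IPv4 address, as in the nvmf/common.sh
    # trace above: field 4 of "ip -o -4 addr show" is ADDR/PREFIX, so cut
    # strips the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    ip=$(get_ip_address mlx_0_0)     # -> 192.168.100.8 on this test bed
    [[ -n $ip ]] || echo "no IPv4 address on mlx_0_0" >&2
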
00:32:57.899 [2024-07-13 21:19:48.534032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.899 [2024-07-13 21:19:48.534036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.465 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:58.465 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:32:58.465 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:58.465 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:58.465 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:58.465 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.465 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3723537 00:32:58.465 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:58.724 [2024-07-13 21:19:49.454480] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2476630/0x247ab20) succeed. 00:32:58.724 [2024-07-13 21:19:49.463435] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2477b30/0x24bc1b0) succeed. 00:32:58.724 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:58.983 Malloc0 00:32:58.983 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:59.242 21:19:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:59.242 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:59.501 [2024-07-13 21:19:50.220481] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:59.501 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:32:59.760 [2024-07-13 21:19:50.396759] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:32:59.760 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3724011 00:32:59.760 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:59.760 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:59.760 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
waitforlisten 3724011 /var/tmp/bdevperf.sock 00:32:59.761 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3724011 ']' 00:32:59.761 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:59.761 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:59.761 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:59.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:59.761 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:59.761 21:19:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:00.696 21:19:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:00.696 21:19:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:33:00.696 21:19:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:00.697 21:19:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:00.955 Nvme0n1 00:33:00.955 21:19:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:01.214 Nvme0n1 00:33:01.214 21:19:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:01.214 21:19:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:03.120 21:19:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:03.120 21:19:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:33:03.379 21:19:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:03.379 21:19:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.756 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.016 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.016 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.016 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.016 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.316 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.316 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.316 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.316 21:19:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.316 21:19:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.316 21:19:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:05.316 21:19:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.316 21:19:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.576 21:19:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.576 21:19:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # 
set_ANA_state non_optimized optimized 00:33:05.576 21:19:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:05.834 21:19:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:05.834 21:19:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:07.213 21:19:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:07.213 21:19:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:07.213 21:19:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.213 21:19:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.213 21:19:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.213 21:19:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:07.213 21:19:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.213 21:19:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.213 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.213 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:07.213 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.213 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:07.473 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.473 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:07.473 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.473 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:07.733 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.733 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:07.733 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.733 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:07.733 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.733 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:07.733 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:07.733 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.992 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.992 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:07.992 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:08.252 21:19:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:33:08.252 21:19:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:09.631 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.631 
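Stripped of xtrace noise, the target-side sequence traced above is six RPCs (workspace paths shortened here): create the RDMA transport, back the subsystem with a malloc bdev, and expose it on two listener ports so the host sees two distinct I/O paths to the same namespace. A condensed sketch, not a verbatim script:

    # Target-side setup, condensed from the rpc.py calls in the trace.
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MB, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
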
21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:09.890 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.890 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:09.890 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.890 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:10.150 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.150 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:10.150 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:10.150 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.150 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.150 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:10.150 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.150 21:20:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:10.409 21:20:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.409 21:20:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:10.409 21:20:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:10.667 21:20:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:33:10.667 21:20:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.043 21:20:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:12.302 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.302 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:12.302 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:12.302 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.564 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.564 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:12.564 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.564 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:12.564 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.564 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:12.564 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.564 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:12.822 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.822 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:12.822 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:33:13.080 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:33:13.080 21:20:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:14.458 21:20:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:14.458 21:20:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:14.458 21:20:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.458 21:20:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:14.458 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:14.458 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:14.458 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:14.458 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.458 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:14.458 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:14.458 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.458 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:14.716 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.717 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:14.717 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.717 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:14.975 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.975 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:14.975 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.975 
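Every status check in this log is the same two-step probe: ask bdevperf for its I/O paths over the private RPC socket, then pick one field out with jq. A sketch of that helper, reconstructed from the traces (the jq filter is the one shown verbatim above; the helper name matches the multipath_status.sh frames):

    # port_status <trsvcid> <attr> <expected>: succeed iff the path on the
    # given listener port reports the expected value for one of the
    # current/connected/accessible attributes.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }

    port_status 4420 current true       # is 4420 the path I/O is using?
    port_status 4421 accessible false   # did ANA "inaccessible" take effect?
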
21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:14.975 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:14.975 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:14.975 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.975 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:15.233 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:15.233 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:15.233 21:20:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:33:15.492 21:20:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:15.492 21:20:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.868 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:33:17.128 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.128 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:17.128 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.128 21:20:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:17.388 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.388 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:17.388 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.388 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:17.388 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:17.388 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:17.388 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.388 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:17.648 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.648 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:17.907 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:17.907 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:33:17.907 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:18.166 21:20:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:19.103 21:20:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:19.103 21:20:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:19.103 21:20:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.103 21:20:09 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:19.362 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.362 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:19.362 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.362 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:19.622 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.622 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:19.622 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.622 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:19.622 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.622 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:19.622 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.622 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:19.881 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.881 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:19.881 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.881 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:20.141 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.141 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:20.141 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.141 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:20.141 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.141 21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:20.141 
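Each state change above is one set_ANA_state call: a pair of nvmf_subsystem_listener_set_ana_state RPCs, one per listener port, followed by a one-second sleep before the checks rerun. The trace at this point also shows the host switching Nvme0n1 to the active_active multipath policy, after which the same ANA matrix is replayed with both paths eligible for I/O. A sketch reconstructed from those paired calls (paths shortened):

    # set_ANA_state <state-for-4420> <state-for-4421>, as traced above.
    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$state_4420"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$state_4421"
    }

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    set_ANA_state non_optimized optimized
    sleep 1
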
21:20:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:20.399 21:20:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:20.658 21:20:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:21.594 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:21.594 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:21.594 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.594 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.853 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:21.853 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:21.853 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.853 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.853 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.853 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:21.853 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.853 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:22.112 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.112 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:22.112 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.112 21:20:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:22.433 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.433 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:22.433 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.433 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:22.433 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.433 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:22.433 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.433 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:22.692 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.692 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:22.692 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:22.692 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:33:22.950 21:20:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:23.887 21:20:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:23.887 21:20:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:23.887 21:20:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.887 21:20:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:24.146 21:20:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.146 21:20:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:24.146 21:20:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.146 21:20:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:24.405 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.405 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:24.405 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.405 
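check_status bundles six port_status probes, always in the same order: current, connected, accessible, each for port 4420 then 4421. Before the policy switch only one path reported current=true at a time; after active_active both can, which is why this step asserts the all-true vector. A sketch matching the traced call order:

    # check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>
    check_status() {
        port_status 4420 current    "$1" &&
        port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" &&
        port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" &&
        port_status 4421 accessible "$6"
    }

    check_status true true true true true true   # both listeners optimized, active_active
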
21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:24.405 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.405 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:24.405 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.405 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:24.663 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.663 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:24.663 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.664 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:24.922 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.922 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:24.922 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.922 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.922 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.922 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:24.922 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:25.181 21:20:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:33:25.440 21:20:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:26.377 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:26.377 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:26.377 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.377 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
00:33:26.377 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:33:26.377 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:26.377 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:26.377 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:26.636 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:26.636 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:26.637 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:26.637 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:26.637 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:26.637 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:26.637 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:26.637 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:26.896 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:26.896 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:26.896 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:26.896 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:27.155 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:27.155 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:27.155 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:27.155 21:20:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:27.155 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:27.155 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:27.155 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:27.155 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
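check_status, per the @68-@73 call sites above, is a thin wrapper that asserts all six path fields in a fixed order: current, connected, accessible, each for port 4420 then 4421. A plausible reconstruction (argument order inferred purely from the trace: check_status true false true true true false maps to port 4421 being neither current nor accessible after it was set inaccessible):

    # check_status <4420 current> <4421 current> <4420 connected>
    #              <4421 connected> <4420 accessible> <4421 accessible>
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }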
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3724011
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3724011 ']'
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3724011
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3724011
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3724011'
killing process with pid 3724011
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3724011
00:33:27.415 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3724011
00:33:27.678 Connection closed with partial response:
00:33:27.678
00:33:27.678
00:33:27.678 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3724011
00:33:27.678 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:27.678 [2024-07-13 21:19:50.461556] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:33:27.678 [2024-07-13 21:19:50.461617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724011 ]
00:33:27.678 EAL: No free 2048 kB hugepages reported on node 1
00:33:27.678 [2024-07-13 21:19:50.531341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:27.678 [2024-07-13 21:19:50.570189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:33:27.678 Running I/O for 90 seconds...
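The killprocess trace from common/autotest_common.sh follows a conventional kill-and-reap pattern: validate the pid, check the process is alive, refuse to signal sudo itself, then kill and wait. Roughly, as the traced lines suggest (anything not traced here is a guess):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                             # @946: no pid given
        kill -0 "$pid"                                        # @950: still alive?
        if [ "$(uname)" = Linux ]; then                       # @951
            process_name=$(ps --no-headers -o comm= "$pid")   # @952
        fi
        [ "$process_name" = sudo ] && return 1                # @956: never kill sudo
        echo "killing process with pid $pid"                  # @964
        kill "$pid"                                           # @965
        wait "$pid"                                           # @970
    }

Here ps reports the target process as reactor_2 (SPDK renames its reactor threads), so the sudo guard passes and the signal lands; the test then dumps bdevperf's captured log, try.txt, which is what follows.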
00:33:27.678 [2024-07-13 21:20:03.745574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.678 [2024-07-13 21:20:03.745615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs from the 21:20:03 burst elided: WRITE lba:40024-40952 (SGL DATA BLOCK) and READ lba:39936-40000 (SGL KEYED DATA BLOCK, key:0x184100), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
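Every completion in this burst (and the one that follows) carries status (03/02): status code type 3h, Path Related Status, with status code 02h, Asymmetric Access Inaccessible. That is the expected completion for I/O issued down a path whose listener has been put into the inaccessible ANA state, and it is what lets the host's multipath logic steer I/O toward the remaining accessible port. To eyeball how much I/O hit an inaccessible path in a run like this, a simple count over the captured log works (path taken from the cat above):

    # count ANA-inaccessible completions recorded by bdevperf
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt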
00:33:27.681 [2024-07-13 21:20:03.749078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x184100
00:33:27.681 [2024-07-13 21:20:03.749087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:27.681 [2024-07-13 21:20:16.123118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x184100
00:33:27.681 [2024-07-13 21:20:16.123158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0
[... further command/completion pairs from the 21:20:16 burst elided: interleaved WRITE lba:128456-128728 (SGL DATA BLOCK) and READ lba:127952-128208 (SGL KEYED DATA BLOCK, key:0x184100), every completion again ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:33:27.682 [2024-07-13 21:20:16.124351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x184100
00:33:27.682 [2024-07-13 21:20:16.124360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC
ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.682 [2024-07-13 21:20:16.124379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x184100 00:33:27.682 [2024-07-13 21:20:16.124400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x184100 00:33:27.682 [2024-07-13 21:20:16.124422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.682 [2024-07-13 21:20:16.124556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.682 [2024-07-13 21:20:16.124577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x184100 00:33:27.682 [2024-07-13 21:20:16.124597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.682 [2024-07-13 21:20:16.124617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.682 [2024-07-13 21:20:16.124638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.682 [2024-07-13 21:20:16.124658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128416 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x184100 00:33:27.682 [2024-07-13 21:20:16.124678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x184100 00:33:27.682 [2024-07-13 21:20:16.124698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x184100 00:33:27.682 [2024-07-13 21:20:16.124719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.682 [2024-07-13 21:20:16.124739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x184100 00:33:27.682 [2024-07-13 21:20:16.124762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x184100 00:33:27.682 [2024-07-13 21:20:16.124784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x184100 00:33:27.682 [2024-07-13 21:20:16.124804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:27.682 [2024-07-13 21:20:16.124816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x184100 00:33:27.683 [2024-07-13 21:20:16.124825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:27.683 [2024-07-13 21:20:16.124837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x184100 00:33:27.683 [2024-07-13 21:20:16.124846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:27.683 [2024-07-13 21:20:16.124857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 
key:0x184100 00:33:27.683 [2024-07-13 21:20:16.124866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:27.683 [2024-07-13 21:20:16.124893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x184100 00:33:27.683 [2024-07-13 21:20:16.124903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:27.683 [2024-07-13 21:20:16.124915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.683 [2024-07-13 21:20:16.124924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:27.683 [2024-07-13 21:20:16.124935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x184100 00:33:27.683 [2024-07-13 21:20:16.124945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:27.683 [2024-07-13 21:20:16.124957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.683 [2024-07-13 21:20:16.124966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:27.683 [2024-07-13 21:20:16.124978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.683 [2024-07-13 21:20:16.124987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:27.683 [2024-07-13 21:20:16.124998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.683 [2024-07-13 21:20:16.125008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:27.683 [2024-07-13 21:20:16.125026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184100 00:33:27.683 [2024-07-13 21:20:16.125037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:27.683 [2024-07-13 21:20:16.125048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.683 [2024-07-13 21:20:16.125057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:27.683 Received shutdown signal, test time was about 26.206528 seconds 00:33:27.683 00:33:27.683 Latency(us) 00:33:27.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.683 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:27.683 Verification LBA range: start 0x0 length 0x4000 00:33:27.683 Nvme0n1 : 26.21 
15885.28 62.05 0.00 0.00 8038.16 66.76 3019898.88 00:33:27.683 =================================================================================================================== 00:33:27.683 Total : 15885.28 62.05 0.00 0.00 8038.16 66.76 3019898.88 00:33:27.683 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.942 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:27.943 rmmod nvme_rdma 00:33:27.943 rmmod nvme_fabrics 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3723537 ']' 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3723537 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3723537 ']' 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3723537 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3723537 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3723537' 00:33:27.943 killing process with pid 3723537 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3723537 00:33:27.943 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3723537 00:33:28.202 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:28.202 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 
-- # [[ rdma == \t\c\p ]] 00:33:28.202 00:33:28.202 real 0m37.279s 00:33:28.202 user 1m44.827s 00:33:28.202 sys 0m9.137s 00:33:28.202 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:28.202 21:20:18 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:28.202 ************************************ 00:33:28.202 END TEST nvmf_host_multipath_status 00:33:28.202 ************************************ 00:33:28.202 21:20:19 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:33:28.202 21:20:19 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:28.202 21:20:19 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:28.202 21:20:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:28.202 ************************************ 00:33:28.202 START TEST nvmf_discovery_remove_ifc 00:33:28.202 ************************************ 00:33:28.202 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:33:28.462 * Looking for test storage... 00:33:28.462 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.462 21:20:19 
nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:33:28.462 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:33:28.462 00:33:28.462 real 0m0.144s 00:33:28.462 user 0m0.060s 00:33:28.462 sys 0m0.093s 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:28.462 21:20:19 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.462 ************************************ 00:33:28.462 END TEST nvmf_discovery_remove_ifc 00:33:28.463 ************************************ 00:33:28.463 21:20:19 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:33:28.463 21:20:19 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:28.463 21:20:19 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:28.463 21:20:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:28.463 ************************************ 00:33:28.463 START TEST nvmf_identify_kernel_target 00:33:28.463 ************************************ 00:33:28.463 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:33:28.722 * Looking for test storage... 
00:33:28.722 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.722 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:28.723 21:20:19 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:35.294 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:35.294 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:33:35.294 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:35.295 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:35.295 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:35.295 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:35.295 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:35.295 altname enp217s0f0np0 00:33:35.295 altname ens818f0np0 00:33:35.295 inet 192.168.100.8/24 scope global mlx_0_0 00:33:35.295 valid_lft forever preferred_lft forever 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:35.295 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:35.295 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:35.295 altname enp217s0f1np1 00:33:35.295 altname ens818f1np1 00:33:35.295 inet 192.168.100.9/24 scope global mlx_0_1 00:33:35.295 valid_lft forever preferred_lft forever 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:35.295 21:20:25 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:35.295 192.168.100.9' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:35.295 192.168.100.9' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:35.295 192.168.100.9' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:35.295 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.296 21:20:25 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:35.296 21:20:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:33:37.830 Waiting for block devices as requested 00:33:37.830 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:37.830 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:38.090 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:38.090 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:38.090 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:38.349 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:38.349 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:38.349 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:38.349 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:38.608 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:38.608 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:38.608 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:38.867 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:38.867 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:38.867 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:39.125 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:39.125 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # 
is_block_zoned nvme0n1 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:39.383 No valid GPT data, bailing 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:39.383 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:39.384 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:33:39.643 00:33:39.643 Discovery Log Number of Records 2, Generation counter 2 00:33:39.643 =====Discovery Log Entry 0====== 00:33:39.643 trtype: rdma 00:33:39.643 adrfam: ipv4 00:33:39.643 subtype: current discovery subsystem 00:33:39.643 treq: not specified, sq flow control disable supported 00:33:39.643 portid: 1 00:33:39.643 trsvcid: 4420 00:33:39.643 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:39.643 traddr: 192.168.100.8 00:33:39.643 eflags: 
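The mkdir/echo/ln sequence traced at nvmf/common.sh@658-677 builds the kernel NVMe-oF target through configfs and then verifies it with `nvme discover`. xtrace hides the redirection targets of the echo calls, so the attribute names below are an assumption based on the standard nvmet configfs layout rather than a verbatim copy of common.sh:

  # Export /dev/nvme0n1 as a kernel NVMe-oF/RDMA target (sketch).
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # attribute names assumed
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
  echo rdma > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

  # Host NQN/ID flags from the log trimmed for brevity.
  nvme discover -t rdma -a 192.168.100.8 -s 4420

Linking the subsystem into the port comes last deliberately: the target only starts accepting connections for the subsystem once that symlink exists.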
none 00:33:39.643 rdma_prtype: not specified 00:33:39.643 rdma_qptype: connected 00:33:39.643 rdma_cms: rdma-cm 00:33:39.643 rdma_pkey: 0x0000 00:33:39.643 =====Discovery Log Entry 1====== 00:33:39.643 trtype: rdma 00:33:39.643 adrfam: ipv4 00:33:39.643 subtype: nvme subsystem 00:33:39.643 treq: not specified, sq flow control disable supported 00:33:39.643 portid: 1 00:33:39.643 trsvcid: 4420 00:33:39.643 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:39.643 traddr: 192.168.100.8 00:33:39.643 eflags: none 00:33:39.643 rdma_prtype: not specified 00:33:39.643 rdma_qptype: connected 00:33:39.643 rdma_cms: rdma-cm 00:33:39.643 rdma_pkey: 0x0000 00:33:39.643 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:33:39.643 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:39.643 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.643 ===================================================== 00:33:39.643 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:39.643 ===================================================== 00:33:39.643 Controller Capabilities/Features 00:33:39.643 ================================ 00:33:39.643 Vendor ID: 0000 00:33:39.643 Subsystem Vendor ID: 0000 00:33:39.643 Serial Number: bcc6934ad01a7144815f 00:33:39.643 Model Number: Linux 00:33:39.643 Firmware Version: 6.7.0-68 00:33:39.643 Recommended Arb Burst: 0 00:33:39.643 IEEE OUI Identifier: 00 00 00 00:33:39.643 Multi-path I/O 00:33:39.643 May have multiple subsystem ports: No 00:33:39.643 May have multiple controllers: No 00:33:39.643 Associated with SR-IOV VF: No 00:33:39.643 Max Data Transfer Size: Unlimited 00:33:39.643 Max Number of Namespaces: 0 00:33:39.643 Max Number of I/O Queues: 1024 00:33:39.643 NVMe Specification Version (VS): 1.3 00:33:39.643 NVMe Specification Version (Identify): 1.3 00:33:39.643 Maximum Queue Entries: 128 00:33:39.643 Contiguous Queues Required: No 00:33:39.643 Arbitration Mechanisms Supported 00:33:39.643 Weighted Round Robin: Not Supported 00:33:39.643 Vendor Specific: Not Supported 00:33:39.643 Reset Timeout: 7500 ms 00:33:39.643 Doorbell Stride: 4 bytes 00:33:39.643 NVM Subsystem Reset: Not Supported 00:33:39.643 Command Sets Supported 00:33:39.643 NVM Command Set: Supported 00:33:39.643 Boot Partition: Not Supported 00:33:39.643 Memory Page Size Minimum: 4096 bytes 00:33:39.643 Memory Page Size Maximum: 4096 bytes 00:33:39.643 Persistent Memory Region: Not Supported 00:33:39.643 Optional Asynchronous Events Supported 00:33:39.643 Namespace Attribute Notices: Not Supported 00:33:39.643 Firmware Activation Notices: Not Supported 00:33:39.643 ANA Change Notices: Not Supported 00:33:39.643 PLE Aggregate Log Change Notices: Not Supported 00:33:39.643 LBA Status Info Alert Notices: Not Supported 00:33:39.643 EGE Aggregate Log Change Notices: Not Supported 00:33:39.643 Normal NVM Subsystem Shutdown event: Not Supported 00:33:39.643 Zone Descriptor Change Notices: Not Supported 00:33:39.643 Discovery Log Change Notices: Supported 00:33:39.643 Controller Attributes 00:33:39.643 128-bit Host Identifier: Not Supported 00:33:39.643 Non-Operational Permissive Mode: Not Supported 00:33:39.643 NVM Sets: Not Supported 00:33:39.643 Read Recovery Levels: Not Supported 00:33:39.643 Endurance Groups: Not Supported 00:33:39.643 Predictable Latency Mode: Not Supported 00:33:39.643 Traffic Based 
Keep ALive: Not Supported 00:33:39.643 Namespace Granularity: Not Supported 00:33:39.643 SQ Associations: Not Supported 00:33:39.643 UUID List: Not Supported 00:33:39.643 Multi-Domain Subsystem: Not Supported 00:33:39.643 Fixed Capacity Management: Not Supported 00:33:39.643 Variable Capacity Management: Not Supported 00:33:39.643 Delete Endurance Group: Not Supported 00:33:39.643 Delete NVM Set: Not Supported 00:33:39.643 Extended LBA Formats Supported: Not Supported 00:33:39.643 Flexible Data Placement Supported: Not Supported 00:33:39.643 00:33:39.643 Controller Memory Buffer Support 00:33:39.643 ================================ 00:33:39.643 Supported: No 00:33:39.643 00:33:39.643 Persistent Memory Region Support 00:33:39.643 ================================ 00:33:39.643 Supported: No 00:33:39.643 00:33:39.643 Admin Command Set Attributes 00:33:39.643 ============================ 00:33:39.643 Security Send/Receive: Not Supported 00:33:39.643 Format NVM: Not Supported 00:33:39.643 Firmware Activate/Download: Not Supported 00:33:39.643 Namespace Management: Not Supported 00:33:39.644 Device Self-Test: Not Supported 00:33:39.644 Directives: Not Supported 00:33:39.644 NVMe-MI: Not Supported 00:33:39.644 Virtualization Management: Not Supported 00:33:39.644 Doorbell Buffer Config: Not Supported 00:33:39.644 Get LBA Status Capability: Not Supported 00:33:39.644 Command & Feature Lockdown Capability: Not Supported 00:33:39.644 Abort Command Limit: 1 00:33:39.644 Async Event Request Limit: 1 00:33:39.644 Number of Firmware Slots: N/A 00:33:39.644 Firmware Slot 1 Read-Only: N/A 00:33:39.644 Firmware Activation Without Reset: N/A 00:33:39.644 Multiple Update Detection Support: N/A 00:33:39.644 Firmware Update Granularity: No Information Provided 00:33:39.644 Per-Namespace SMART Log: No 00:33:39.644 Asymmetric Namespace Access Log Page: Not Supported 00:33:39.644 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:39.644 Command Effects Log Page: Not Supported 00:33:39.644 Get Log Page Extended Data: Supported 00:33:39.644 Telemetry Log Pages: Not Supported 00:33:39.644 Persistent Event Log Pages: Not Supported 00:33:39.644 Supported Log Pages Log Page: May Support 00:33:39.644 Commands Supported & Effects Log Page: Not Supported 00:33:39.644 Feature Identifiers & Effects Log Page:May Support 00:33:39.644 NVMe-MI Commands & Effects Log Page: May Support 00:33:39.644 Data Area 4 for Telemetry Log: Not Supported 00:33:39.644 Error Log Page Entries Supported: 1 00:33:39.644 Keep Alive: Not Supported 00:33:39.644 00:33:39.644 NVM Command Set Attributes 00:33:39.644 ========================== 00:33:39.644 Submission Queue Entry Size 00:33:39.644 Max: 1 00:33:39.644 Min: 1 00:33:39.644 Completion Queue Entry Size 00:33:39.644 Max: 1 00:33:39.644 Min: 1 00:33:39.644 Number of Namespaces: 0 00:33:39.644 Compare Command: Not Supported 00:33:39.644 Write Uncorrectable Command: Not Supported 00:33:39.644 Dataset Management Command: Not Supported 00:33:39.644 Write Zeroes Command: Not Supported 00:33:39.644 Set Features Save Field: Not Supported 00:33:39.644 Reservations: Not Supported 00:33:39.644 Timestamp: Not Supported 00:33:39.644 Copy: Not Supported 00:33:39.644 Volatile Write Cache: Not Present 00:33:39.644 Atomic Write Unit (Normal): 1 00:33:39.644 Atomic Write Unit (PFail): 1 00:33:39.644 Atomic Compare & Write Unit: 1 00:33:39.644 Fused Compare & Write: Not Supported 00:33:39.644 Scatter-Gather List 00:33:39.644 SGL Command Set: Supported 00:33:39.644 SGL Keyed: Supported 00:33:39.644 SGL Bit 
Bucket Descriptor: Not Supported 00:33:39.644 SGL Metadata Pointer: Not Supported 00:33:39.644 Oversized SGL: Not Supported 00:33:39.644 SGL Metadata Address: Not Supported 00:33:39.644 SGL Offset: Supported 00:33:39.644 Transport SGL Data Block: Not Supported 00:33:39.644 Replay Protected Memory Block: Not Supported 00:33:39.644 00:33:39.644 Firmware Slot Information 00:33:39.644 ========================= 00:33:39.644 Active slot: 0 00:33:39.644 00:33:39.644 00:33:39.644 Error Log 00:33:39.644 ========= 00:33:39.644 00:33:39.644 Active Namespaces 00:33:39.644 ================= 00:33:39.644 Discovery Log Page 00:33:39.644 ================== 00:33:39.644 Generation Counter: 2 00:33:39.644 Number of Records: 2 00:33:39.644 Record Format: 0 00:33:39.644 00:33:39.644 Discovery Log Entry 0 00:33:39.644 ---------------------- 00:33:39.644 Transport Type: 1 (RDMA) 00:33:39.644 Address Family: 1 (IPv4) 00:33:39.644 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:39.644 Entry Flags: 00:33:39.644 Duplicate Returned Information: 0 00:33:39.644 Explicit Persistent Connection Support for Discovery: 0 00:33:39.644 Transport Requirements: 00:33:39.644 Secure Channel: Not Specified 00:33:39.644 Port ID: 1 (0x0001) 00:33:39.644 Controller ID: 65535 (0xffff) 00:33:39.644 Admin Max SQ Size: 32 00:33:39.644 Transport Service Identifier: 4420 00:33:39.644 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:39.644 Transport Address: 192.168.100.8 00:33:39.644 Transport Specific Address Subtype - RDMA 00:33:39.644 RDMA QP Service Type: 1 (Reliable Connected) 00:33:39.644 RDMA Provider Type: 1 (No provider specified) 00:33:39.644 RDMA CM Service: 1 (RDMA_CM) 00:33:39.644 Discovery Log Entry 1 00:33:39.644 ---------------------- 00:33:39.644 Transport Type: 1 (RDMA) 00:33:39.644 Address Family: 1 (IPv4) 00:33:39.644 Subsystem Type: 2 (NVM Subsystem) 00:33:39.644 Entry Flags: 00:33:39.644 Duplicate Returned Information: 0 00:33:39.644 Explicit Persistent Connection Support for Discovery: 0 00:33:39.644 Transport Requirements: 00:33:39.644 Secure Channel: Not Specified 00:33:39.644 Port ID: 1 (0x0001) 00:33:39.644 Controller ID: 65535 (0xffff) 00:33:39.644 Admin Max SQ Size: 32 00:33:39.644 Transport Service Identifier: 4420 00:33:39.644 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:39.644 Transport Address: 192.168.100.8 00:33:39.644 Transport Specific Address Subtype - RDMA 00:33:39.644 RDMA QP Service Type: 1 (Reliable Connected) 00:33:39.644 RDMA Provider Type: 1 (No provider specified) 00:33:39.644 RDMA CM Service: 1 (RDMA_CM) 00:33:39.644 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:39.644 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.644 get_feature(0x01) failed 00:33:39.644 get_feature(0x02) failed 00:33:39.644 get_feature(0x04) failed 00:33:39.644 ===================================================== 00:33:39.644 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:33:39.644 ===================================================== 00:33:39.644 Controller Capabilities/Features 00:33:39.644 ================================ 00:33:39.644 Vendor ID: 0000 00:33:39.644 Subsystem Vendor ID: 0000 00:33:39.644 Serial Number: 6f70ee6c4bf910d5ccb8 00:33:39.644 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 
00:33:39.644 Firmware Version: 6.7.0-68 00:33:39.644 Recommended Arb Burst: 6 00:33:39.644 IEEE OUI Identifier: 00 00 00 00:33:39.644 Multi-path I/O 00:33:39.644 May have multiple subsystem ports: Yes 00:33:39.644 May have multiple controllers: Yes 00:33:39.644 Associated with SR-IOV VF: No 00:33:39.644 Max Data Transfer Size: 1048576 00:33:39.644 Max Number of Namespaces: 1024 00:33:39.644 Max Number of I/O Queues: 128 00:33:39.644 NVMe Specification Version (VS): 1.3 00:33:39.644 NVMe Specification Version (Identify): 1.3 00:33:39.644 Maximum Queue Entries: 128 00:33:39.644 Contiguous Queues Required: No 00:33:39.644 Arbitration Mechanisms Supported 00:33:39.644 Weighted Round Robin: Not Supported 00:33:39.644 Vendor Specific: Not Supported 00:33:39.644 Reset Timeout: 7500 ms 00:33:39.644 Doorbell Stride: 4 bytes 00:33:39.644 NVM Subsystem Reset: Not Supported 00:33:39.644 Command Sets Supported 00:33:39.644 NVM Command Set: Supported 00:33:39.644 Boot Partition: Not Supported 00:33:39.644 Memory Page Size Minimum: 4096 bytes 00:33:39.644 Memory Page Size Maximum: 4096 bytes 00:33:39.644 Persistent Memory Region: Not Supported 00:33:39.644 Optional Asynchronous Events Supported 00:33:39.644 Namespace Attribute Notices: Supported 00:33:39.644 Firmware Activation Notices: Not Supported 00:33:39.644 ANA Change Notices: Supported 00:33:39.644 PLE Aggregate Log Change Notices: Not Supported 00:33:39.644 LBA Status Info Alert Notices: Not Supported 00:33:39.644 EGE Aggregate Log Change Notices: Not Supported 00:33:39.644 Normal NVM Subsystem Shutdown event: Not Supported 00:33:39.644 Zone Descriptor Change Notices: Not Supported 00:33:39.644 Discovery Log Change Notices: Not Supported 00:33:39.644 Controller Attributes 00:33:39.644 128-bit Host Identifier: Supported 00:33:39.644 Non-Operational Permissive Mode: Not Supported 00:33:39.644 NVM Sets: Not Supported 00:33:39.644 Read Recovery Levels: Not Supported 00:33:39.644 Endurance Groups: Not Supported 00:33:39.644 Predictable Latency Mode: Not Supported 00:33:39.644 Traffic Based Keep ALive: Supported 00:33:39.644 Namespace Granularity: Not Supported 00:33:39.644 SQ Associations: Not Supported 00:33:39.644 UUID List: Not Supported 00:33:39.644 Multi-Domain Subsystem: Not Supported 00:33:39.644 Fixed Capacity Management: Not Supported 00:33:39.644 Variable Capacity Management: Not Supported 00:33:39.644 Delete Endurance Group: Not Supported 00:33:39.644 Delete NVM Set: Not Supported 00:33:39.644 Extended LBA Formats Supported: Not Supported 00:33:39.644 Flexible Data Placement Supported: Not Supported 00:33:39.644 00:33:39.644 Controller Memory Buffer Support 00:33:39.644 ================================ 00:33:39.644 Supported: No 00:33:39.644 00:33:39.644 Persistent Memory Region Support 00:33:39.644 ================================ 00:33:39.644 Supported: No 00:33:39.644 00:33:39.644 Admin Command Set Attributes 00:33:39.644 ============================ 00:33:39.644 Security Send/Receive: Not Supported 00:33:39.644 Format NVM: Not Supported 00:33:39.644 Firmware Activate/Download: Not Supported 00:33:39.644 Namespace Management: Not Supported 00:33:39.644 Device Self-Test: Not Supported 00:33:39.644 Directives: Not Supported 00:33:39.644 NVMe-MI: Not Supported 00:33:39.644 Virtualization Management: Not Supported 00:33:39.645 Doorbell Buffer Config: Not Supported 00:33:39.645 Get LBA Status Capability: Not Supported 00:33:39.645 Command & Feature Lockdown Capability: Not Supported 00:33:39.645 Abort Command Limit: 4 00:33:39.645 Async Event 
Request Limit: 4 00:33:39.645 Number of Firmware Slots: N/A 00:33:39.645 Firmware Slot 1 Read-Only: N/A 00:33:39.645 Firmware Activation Without Reset: N/A 00:33:39.645 Multiple Update Detection Support: N/A 00:33:39.645 Firmware Update Granularity: No Information Provided 00:33:39.645 Per-Namespace SMART Log: Yes 00:33:39.645 Asymmetric Namespace Access Log Page: Supported 00:33:39.645 ANA Transition Time : 10 sec 00:33:39.645 00:33:39.645 Asymmetric Namespace Access Capabilities 00:33:39.645 ANA Optimized State : Supported 00:33:39.645 ANA Non-Optimized State : Supported 00:33:39.645 ANA Inaccessible State : Supported 00:33:39.645 ANA Persistent Loss State : Supported 00:33:39.645 ANA Change State : Supported 00:33:39.645 ANAGRPID is not changed : No 00:33:39.645 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:39.645 00:33:39.645 ANA Group Identifier Maximum : 128 00:33:39.645 Number of ANA Group Identifiers : 128 00:33:39.645 Max Number of Allowed Namespaces : 1024 00:33:39.645 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:39.645 Command Effects Log Page: Supported 00:33:39.645 Get Log Page Extended Data: Supported 00:33:39.645 Telemetry Log Pages: Not Supported 00:33:39.645 Persistent Event Log Pages: Not Supported 00:33:39.645 Supported Log Pages Log Page: May Support 00:33:39.645 Commands Supported & Effects Log Page: Not Supported 00:33:39.645 Feature Identifiers & Effects Log Page:May Support 00:33:39.645 NVMe-MI Commands & Effects Log Page: May Support 00:33:39.645 Data Area 4 for Telemetry Log: Not Supported 00:33:39.645 Error Log Page Entries Supported: 128 00:33:39.645 Keep Alive: Supported 00:33:39.645 Keep Alive Granularity: 1000 ms 00:33:39.645 00:33:39.645 NVM Command Set Attributes 00:33:39.645 ========================== 00:33:39.645 Submission Queue Entry Size 00:33:39.645 Max: 64 00:33:39.645 Min: 64 00:33:39.645 Completion Queue Entry Size 00:33:39.645 Max: 16 00:33:39.645 Min: 16 00:33:39.645 Number of Namespaces: 1024 00:33:39.645 Compare Command: Not Supported 00:33:39.645 Write Uncorrectable Command: Not Supported 00:33:39.645 Dataset Management Command: Supported 00:33:39.645 Write Zeroes Command: Supported 00:33:39.645 Set Features Save Field: Not Supported 00:33:39.645 Reservations: Not Supported 00:33:39.645 Timestamp: Not Supported 00:33:39.645 Copy: Not Supported 00:33:39.645 Volatile Write Cache: Present 00:33:39.645 Atomic Write Unit (Normal): 1 00:33:39.645 Atomic Write Unit (PFail): 1 00:33:39.645 Atomic Compare & Write Unit: 1 00:33:39.645 Fused Compare & Write: Not Supported 00:33:39.645 Scatter-Gather List 00:33:39.645 SGL Command Set: Supported 00:33:39.645 SGL Keyed: Supported 00:33:39.645 SGL Bit Bucket Descriptor: Not Supported 00:33:39.645 SGL Metadata Pointer: Not Supported 00:33:39.645 Oversized SGL: Not Supported 00:33:39.645 SGL Metadata Address: Not Supported 00:33:39.645 SGL Offset: Supported 00:33:39.645 Transport SGL Data Block: Not Supported 00:33:39.645 Replay Protected Memory Block: Not Supported 00:33:39.645 00:33:39.645 Firmware Slot Information 00:33:39.645 ========================= 00:33:39.645 Active slot: 0 00:33:39.645 00:33:39.645 Asymmetric Namespace Access 00:33:39.645 =========================== 00:33:39.645 Change Count : 0 00:33:39.645 Number of ANA Group Descriptors : 1 00:33:39.645 ANA Group Descriptor : 0 00:33:39.645 ANA Group ID : 1 00:33:39.645 Number of NSID Values : 1 00:33:39.645 Change Count : 0 00:33:39.645 ANA State : 1 00:33:39.645 Namespace Identifier : 1 00:33:39.645 00:33:39.645 Commands Supported 
and Effects 00:33:39.645 ============================== 00:33:39.645 Admin Commands 00:33:39.645 -------------- 00:33:39.645 Get Log Page (02h): Supported 00:33:39.645 Identify (06h): Supported 00:33:39.645 Abort (08h): Supported 00:33:39.645 Set Features (09h): Supported 00:33:39.645 Get Features (0Ah): Supported 00:33:39.645 Asynchronous Event Request (0Ch): Supported 00:33:39.645 Keep Alive (18h): Supported 00:33:39.645 I/O Commands 00:33:39.645 ------------ 00:33:39.645 Flush (00h): Supported 00:33:39.645 Write (01h): Supported LBA-Change 00:33:39.645 Read (02h): Supported 00:33:39.645 Write Zeroes (08h): Supported LBA-Change 00:33:39.645 Dataset Management (09h): Supported 00:33:39.645 00:33:39.645 Error Log 00:33:39.645 ========= 00:33:39.645 Entry: 0 00:33:39.645 Error Count: 0x3 00:33:39.645 Submission Queue Id: 0x0 00:33:39.645 Command Id: 0x5 00:33:39.645 Phase Bit: 0 00:33:39.645 Status Code: 0x2 00:33:39.645 Status Code Type: 0x0 00:33:39.645 Do Not Retry: 1 00:33:39.904 Error Location: 0x28 00:33:39.904 LBA: 0x0 00:33:39.904 Namespace: 0x0 00:33:39.904 Vendor Log Page: 0x0 00:33:39.904 ----------- 00:33:39.904 Entry: 1 00:33:39.904 Error Count: 0x2 00:33:39.904 Submission Queue Id: 0x0 00:33:39.904 Command Id: 0x5 00:33:39.904 Phase Bit: 0 00:33:39.904 Status Code: 0x2 00:33:39.904 Status Code Type: 0x0 00:33:39.904 Do Not Retry: 1 00:33:39.904 Error Location: 0x28 00:33:39.904 LBA: 0x0 00:33:39.904 Namespace: 0x0 00:33:39.904 Vendor Log Page: 0x0 00:33:39.904 ----------- 00:33:39.904 Entry: 2 00:33:39.904 Error Count: 0x1 00:33:39.904 Submission Queue Id: 0x0 00:33:39.904 Command Id: 0x0 00:33:39.904 Phase Bit: 0 00:33:39.904 Status Code: 0x2 00:33:39.904 Status Code Type: 0x0 00:33:39.904 Do Not Retry: 1 00:33:39.904 Error Location: 0x28 00:33:39.904 LBA: 0x0 00:33:39.904 Namespace: 0x0 00:33:39.904 Vendor Log Page: 0x0 00:33:39.904 00:33:39.904 Number of Queues 00:33:39.904 ================ 00:33:39.904 Number of I/O Submission Queues: 128 00:33:39.904 Number of I/O Completion Queues: 128 00:33:39.904 00:33:39.904 ZNS Specific Controller Data 00:33:39.904 ============================ 00:33:39.904 Zone Append Size Limit: 0 00:33:39.904 00:33:39.904 00:33:39.904 Active Namespaces 00:33:39.904 ================= 00:33:39.904 get_feature(0x05) failed 00:33:39.904 Namespace ID:1 00:33:39.904 Command Set Identifier: NVM (00h) 00:33:39.904 Deallocate: Supported 00:33:39.904 Deallocated/Unwritten Error: Not Supported 00:33:39.904 Deallocated Read Value: Unknown 00:33:39.904 Deallocate in Write Zeroes: Not Supported 00:33:39.904 Deallocated Guard Field: 0xFFFF 00:33:39.904 Flush: Supported 00:33:39.904 Reservation: Not Supported 00:33:39.904 Namespace Sharing Capabilities: Multiple Controllers 00:33:39.904 Size (in LBAs): 3907029168 (1863GiB) 00:33:39.904 Capacity (in LBAs): 3907029168 (1863GiB) 00:33:39.904 Utilization (in LBAs): 3907029168 (1863GiB) 00:33:39.904 UUID: a58723b0-f57c-45e3-821f-9d3767959040 00:33:39.904 Thin Provisioning: Not Supported 00:33:39.904 Per-NS Atomic Units: Yes 00:33:39.904 Atomic Boundary Size (Normal): 0 00:33:39.904 Atomic Boundary Size (PFail): 0 00:33:39.904 Atomic Boundary Offset: 0 00:33:39.904 NGUID/EUI64 Never Reused: No 00:33:39.904 ANA group ID: 1 00:33:39.904 Namespace Write Protected: No 00:33:39.904 Number of LBA Formats: 1 00:33:39.904 Current LBA Format: LBA Format #00 00:33:39.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:39.904 00:33:39.904 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # 
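Both identify passes above drive the same binary at the same transport address; only the subsystem NQN changes. The leading get_feature(0x01/0x02/0x04/0x05) failures appear benign here, since the kernel target does not implement those optional features. The invocation pattern, condensed:

  # Identify the discovery controller, then the exported NVM subsystem.
  IDENTIFY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify
  TRID='trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'

  "$IDENTIFY" -r "$TRID subnqn:nqn.2014-08.org.nvmexpress.discovery"
  "$IDENTIFY" -r "$TRID subnqn:nqn.2016-06.io.spdk:testnqn"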
nvmftestfini 00:33:39.904 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:39.904 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:39.904 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:39.904 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:39.905 rmmod nvme_rdma 00:33:39.905 rmmod nvme_fabrics 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:33:39.905 21:20:30 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:33:43.199 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:80:04.3 (8086 2021): ioatdma -> 
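clean_kernel_target (nvmf/common.sh@684-695) tears the configfs tree down in reverse: disable the namespace, unlink the port-to-subsystem binding, remove the directories, then unload the nvmet modules. Condensed, reusing the paths from the setup sketch above; the target of the traced `echo 0` is hidden by xtrace and assumed to be the namespace enable flag:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  echo 0 > "$subsys/namespaces/1/enable"   # assumed target of 'echo 0'
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1"
  rmdir "$nvmet/ports/1"
  rmdir "$subsys"
  modprobe -r nvmet_rdma nvmet

The reverse order matters: configfs refuses to rmdir a subsystem that is still linked into a port or still holds namespace directories.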
vfio-pci 00:33:43.199 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:43.199 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:44.605 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:33:44.864 00:33:44.864 real 0m16.265s 00:33:44.864 user 0m4.042s 00:33:44.864 sys 0m9.508s 00:33:44.864 21:20:35 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:44.864 21:20:35 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.864 ************************************ 00:33:44.864 END TEST nvmf_identify_kernel_target 00:33:44.864 ************************************ 00:33:44.864 21:20:35 nvmf_rdma -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:33:44.864 21:20:35 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:44.864 21:20:35 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:44.864 21:20:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:44.864 ************************************ 00:33:44.864 START TEST nvmf_auth_host 00:33:44.864 ************************************ 00:33:44.864 21:20:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:33:44.864 * Looking for test storage... 00:33:44.864 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:44.864 21:20:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.864 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:45.124 21:20:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:45.125 21:20:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 
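auth.sh declares its parameter space up front: three digests and five ffdhe groups, plus the subsystem/host NQNs and the configfs paths for the kernel-side host entry. Presumably the test walks the full cross product; a sketch of that loop shape (the per-combination connect/authenticate body is not part of this excerpt and is assumed):

  digests=("sha256" "sha384" "sha512")
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          echo "DH-HMAC-CHAP case: digest=$digest dhgroup=$dhgroup"
      done
  done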
00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.695 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:51.696 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:51.696 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 
0x1015 == \0\x\1\0\1\7 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:51.696 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:51.696 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # 
modprobe rdma_ucm 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:51.696 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:51.696 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:51.696 altname enp217s0f0np0 00:33:51.696 altname ens818f0np0 00:33:51.696 inet 192.168.100.8/24 scope global mlx_0_0 00:33:51.696 valid_lft forever preferred_lft forever 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:51.696 21:20:42 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:51.696 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:51.696 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:51.696 altname enp217s0f1np1 00:33:51.696 altname ens818f1np1 00:33:51.696 inet 192.168.100.9/24 scope global mlx_0_1 00:33:51.696 valid_lft forever preferred_lft forever 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:51.696 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr 
show mlx_0_0 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:51.697 192.168.100.9' 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:51.697 192.168.100.9' 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:51.697 192.168.100.9' 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3738340 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3738340 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3738340 ']' 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
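nvmfappstart backgrounds the target (nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, pid 3738340 above) and waitforlisten blocks until the app answers on its RPC socket. A condensed sketch of that start-and-poll pattern, assuming the usual rpc.py probe that autotest_common.sh relies on:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!

  # Poll the UNIX-domain RPC socket until the target is ready.
  for ((i = 0; i < 100; i++)); do
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done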
00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:51.697 21:20:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b954b3ee7c07f7ef26753d8b63e0e14b 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Ng5 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b954b3ee7c07f7ef26753d8b63e0e14b 0 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b954b3ee7c07f7ef26753d8b63e0e14b 0 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b954b3ee7c07f7ef26753d8b63e0e14b 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Ng5 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Ng5 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Ng5 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # 
digest=sha512 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:52.633 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b484032b07bc1d28fbc3c694d1de4766afe873e5bf4e68e6fcf694876f815cc6 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NpN 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b484032b07bc1d28fbc3c694d1de4766afe873e5bf4e68e6fcf694876f815cc6 3 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b484032b07bc1d28fbc3c694d1de4766afe873e5bf4e68e6fcf694876f815cc6 3 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b484032b07bc1d28fbc3c694d1de4766afe873e5bf4e68e6fcf694876f815cc6 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NpN 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NpN 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.NpN 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6cad119573f13a5e79d338a5185e6e77af7f2c25fe71afae 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OCW 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6cad119573f13a5e79d338a5185e6e77af7f2c25fe71afae 0 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6cad119573f13a5e79d338a5185e6e77af7f2c25fe71afae 0 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6cad119573f13a5e79d338a5185e6e77af7f2c25fe71afae 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.OCW 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OCW 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.OCW 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=244b4f8cc049ed931982214b5e22037fb2868bc62eae87e3 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.brO 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 244b4f8cc049ed931982214b5e22037fb2868bc62eae87e3 2 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 244b4f8cc049ed931982214b5e22037fb2868bc62eae87e3 2 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=244b4f8cc049ed931982214b5e22037fb2868bc62eae87e3 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:52.634 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.brO 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.brO 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.brO 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3c926823a04e03e87737fd9fbeca5e9c 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Jgg 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3c926823a04e03e87737fd9fbeca5e9c 1 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 3c926823a04e03e87737fd9fbeca5e9c 1 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3c926823a04e03e87737fd9fbeca5e9c 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Jgg 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Jgg 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Jgg 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5e720a1a7360fbeda8609c40a9a53d83 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.oox 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5e720a1a7360fbeda8609c40a9a53d83 1 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5e720a1a7360fbeda8609c40a9a53d83 1 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5e720a1a7360fbeda8609c40a9a53d83 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.oox 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.oox 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.oox 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:52.892 21:20:43 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4a81b2629345523555bb3c95de310e3f6058864a12f97a4b 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xtb 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4a81b2629345523555bb3c95de310e3f6058864a12f97a4b 2 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4a81b2629345523555bb3c95de310e3f6058864a12f97a4b 2 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4a81b2629345523555bb3c95de310e3f6058864a12f97a4b 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xtb 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xtb 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xtb 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:52.892 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=58f1980b5ab73966d3f18cfb762de810 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.D4D 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 58f1980b5ab73966d3f18cfb762de810 0 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 58f1980b5ab73966d3f18cfb762de810 0 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=58f1980b5ab73966d3f18cfb762de810 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:52.893 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.D4D 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.D4D 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.D4D 
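[editor's note] Each gen_dhchap_key <digest> <len> call traced above follows the same recipe: pull len/2 random bytes from /dev/urandom as a hex string, then wrap it in the DHHC-1:<digest-id>:<base64>: secret format. The inline "python -" body is not captured by xtrace; the sketch below reconstructs it on the assumption, consistent with the NVMe DH-HMAC-CHAP secret representation and with the DHHC-1 strings visible later in this log, that it base64-encodes the key bytes with their CRC-32 appended:

# Sketch of gen_dhchap_key as traced above. The xxd/mktemp/chmod/echo steps
# appear verbatim in the trace; the python body is an assumption (see note).
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

gen_dhchap_key() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of key material
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # format_dhchap_key / format_key DHHC-1, inlined:
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 appended to the key bytes
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}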
00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=afa363aa90728f44ccb659241cf398d2cc45bd10680fef058221bfcec6e72a7e 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Si6 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key afa363aa90728f44ccb659241cf398d2cc45bd10680fef058221bfcec6e72a7e 3 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 afa363aa90728f44ccb659241cf398d2cc45bd10680fef058221bfcec6e72a7e 3 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=afa363aa90728f44ccb659241cf398d2cc45bd10680fef058221bfcec6e72a7e 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Si6 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Si6 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Si6 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3738340 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3738340 ']' 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
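[editor's note] The remainder of this section does three things: it registers each generated key file with the running nvmf_tgt over the RPC socket, builds a kernel nvmet subsystem as the authenticating counterpart over configfs, and then attaches controllers with DH-HMAC-CHAP enabled for every digest/dhgroup/key combination. A condensed, annotated sketch of those stages follows; the scripts/rpc.py spelling of rpc_cmd and the configfs attribute names are assumptions (xtrace does not record redirection targets, so the standard kernel nvmet layout is used), while the commands and arguments themselves mirror the trace:

# Stage 1: register host and controller keys with the target keyring
# (rpc_cmd in the trace wraps the SPDK RPC client).
scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Ng5
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NpN
# ... repeated for key1/ckey1 through key4 (ckey4 is empty and skipped)

# Stage 2: expose a kernel nvmet subsystem for the host to authenticate against.
# Attribute names are the standard nvmet configfs layout; redirect targets are
# inferred, since they are not visible in the xtrace output.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
echo rdma          > "$nvmet/ports/1/addr_trtype"
echo 4420          > "$nvmet/ports/1/addr_trsvcid"
echo ipv4          > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# Stage 3: for each digest/dhgroup/keyid, nvmet_auth_set_key installs the
# matching secret on the kernel side, then the initiator connects with
# DH-HMAC-CHAP options (these two RPCs appear verbatim in the trace):
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1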
00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:53.152 21:20:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ng5 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.NpN ]] 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NpN 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.152 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.OCW 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.brO ]] 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.brO 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Jgg 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.oox ]] 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oox 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.411 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.xtb 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.D4D ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.D4D 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Si6 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:53.412 21:20:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:33:56.700 Waiting for block devices as requested 00:33:56.700 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:56.700 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:56.700 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:56.700 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:56.700 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:56.700 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:56.959 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:56.959 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:56.959 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:57.218 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:57.218 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:57.218 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:57.218 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:57.477 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:57.477 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:57.477 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:57.736 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:58.304 No valid GPT data, bailing 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:58.304 21:20:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:33:58.305 21:20:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:33:58.305 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:58.305 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:58.305 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:58.305 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:58.564 
21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:33:58.564 00:33:58.564 Discovery Log Number of Records 2, Generation counter 2 00:33:58.564 =====Discovery Log Entry 0====== 00:33:58.564 trtype: rdma 00:33:58.564 adrfam: ipv4 00:33:58.564 subtype: current discovery subsystem 00:33:58.564 treq: not specified, sq flow control disable supported 00:33:58.564 portid: 1 00:33:58.564 trsvcid: 4420 00:33:58.564 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:58.564 traddr: 192.168.100.8 00:33:58.564 eflags: none 00:33:58.564 rdma_prtype: not specified 00:33:58.564 rdma_qptype: connected 00:33:58.564 rdma_cms: rdma-cm 00:33:58.564 rdma_pkey: 0x0000 00:33:58.564 =====Discovery Log Entry 1====== 00:33:58.564 trtype: rdma 00:33:58.564 adrfam: ipv4 00:33:58.564 subtype: nvme subsystem 00:33:58.564 treq: not specified, sq flow control disable supported 00:33:58.564 portid: 1 00:33:58.564 trsvcid: 4420 00:33:58.564 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:58.564 traddr: 192.168.100.8 00:33:58.564 eflags: none 00:33:58.564 rdma_prtype: not specified 00:33:58.564 rdma_qptype: connected 00:33:58.564 rdma_cms: rdma-cm 00:33:58.564 rdma_pkey: 0x0000 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:58.564 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.565 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.825 nvme0n1 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.825 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.084 nvme0n1 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.084 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.084 21:20:49 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.085 21:20:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.345 nvme0n1 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.345 21:20:50 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.345 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.604 nvme0n1 00:33:59.604 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.604 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.604 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.604 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.604 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:33:59.605 21:20:50 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.605 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.864 nvme0n1 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.864 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.124 nvme0n1 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.124 21:20:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.384 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.643 nvme0n1 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
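The trace repeats one fixed cycle per (digest, dhgroup, keyid) tuple: program the key on the target, pin the host's DH-HMAC-CHAP options, attach, verify, detach. Below is a minimal sketch of that connect_authenticate cycle, reconstructed only from the xtrace lines in this log: rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, and the NQNs, address, and --dhchap-* flags appear verbatim in the trace, but the exact host/auth.sh source may differ, and the keys/ckeys arrays are assumed to be set up earlier in the suite.

# sketch reconstructed from the xtrace above; not SPDK's exact source
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # per the trace, ckey expands to nothing when no controller key is
    # configured (keyid 4 has ckey=''), so unidirectional auth is exercised too
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # pin the host to a single digest and DH group, then connect with key $keyid
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # authentication succeeded only if the controller actually came up; clean up after
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The bare timestamped nvme0n1 lines between cycles are the bdev name printed by the bdev_nvme_attach_controller RPC on a successful, authenticated connect.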
00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.643 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.902 nvme0n1 00:34:00.902 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.902 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.902 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.902 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.902 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.902 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.902 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.902 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.902 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.902 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.903 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.162 nvme0n1 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.162 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:01.163 
21:20:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.163 21:20:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 nvme0n1 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.422 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.423 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.682 nvme0n1 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.682 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:01.941 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:01.942 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.201 nvme0n1 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.201 21:20:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.460 nvme0n1 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:02.460 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:02.461 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:02.461 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:02.461 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.461 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.461 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.719 nvme0n1 00:34:02.719 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.978 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.238 nvme0n1 00:34:03.238 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.238 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.238 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.238 21:20:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.238 21:20:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.238 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.497 nvme0n1 00:34:03.497 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.497 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.497 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.497 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.497 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.755 
21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.755 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.756 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.015 nvme0n1 00:34:04.015 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.015 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.015 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.015 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.015 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.015 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.015 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.015 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.015 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.015 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.274 21:20:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.533 nvme0n1 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
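Each block in this trace repeats one pattern from host/auth.sh: program a DH-HMAC-CHAP secret into the kernel nvmet target, then authenticate a host connection against it with matching settings. A minimal sketch of the driving loop, reconstructed from the "for digest" / "for dhgroup" / "for keyid" entries logged above (the exact digest and dhgroup lists are assumptions inferred from the combinations exercised in this run; keys/ckeys are the script's arrays of DHHC-1 secrets):

    # Sketch of the loop behind this trace (host/auth.sh @100-@104).
    # Assumption: the lists cover at least the values seen in this log.
    digests=("sha256" "sha384")                  # sha384 passes start further down
    dhgroups=("ffdhe2048" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do       # keyids 0..4 in this run
                # target side: install the key (plus ctrlr key for bidirectional auth)
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # host side: set options, attach, verify the controller, detach
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

The DHHC-1:<nn>: prefix on each secret selects the hash used to transform the key material (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512 in the NVMe in-band authentication format).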
00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.533 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:04.821 
21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.821 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.081 nvme0n1 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.081 21:20:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.648 nvme0n1 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:05.648 21:20:56 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.648 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:06.216 nvme0n1 00:34:06.216 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.216 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.216 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.216 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.216 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:06.217 21:20:56 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.217 21:20:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.784 nvme0n1 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.784 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.785 21:20:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.353 nvme0n1 00:34:07.353 21:20:58 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.353 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.353 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.353 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.353 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.353 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.612 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.180 nvme0n1 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.180 21:20:58 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.180 21:20:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.747 nvme0n1 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:08.747 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.006 21:20:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.573 nvme0n1 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:09.573 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:09.574 
21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.574 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.832 nvme0n1 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.832 
21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:09.832 
21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.832 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.090 nvme0n1 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:10.090 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.091 21:21:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.348 nvme0n1 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:10.348 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.349 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.607 nvme0n1 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.607 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:10.608 21:21:01 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.608 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.866 nvme0n1 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:10.866 21:21:01 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.866 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.125 nvme0n1 
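Each nvme0n1 block above is one pass of the connect_authenticate helper: the host is restricted to a single digest/dhgroup pair, a controller is attached with the DH-HMAC-CHAP key under test, the resulting controller name is verified, and the controller is detached again. Condensed into standalone form (a sketch only: rpc.py stands in for the test framework's rpc_cmd wrapper, and key2/ckey2 are names of keys registered earlier in the test, not shown in this stretch of the log):

    # One authentication round for sha384/ffdhe2048, keyid 2, condensed
    # from the xtrace above; flags match the logged RPC calls.
    digest=sha384 dhgroup=ffdhe2048 keyid=2
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    # Authentication succeeded iff the controller actually came up:
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0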
00:34:11.125 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.125 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.125 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.125 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.125 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.125 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.125 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.125 21:21:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.125 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.125 21:21:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.384 nvme0n1 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.384 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.643 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.902 nvme0n1 00:34:11.902 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.902 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.902 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.902 
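Every RPC in this log is bracketed by common/autotest_common.sh@559 (xtrace_disable) and @587 (the [[ 0 == 0 ]] flag check inside xtrace_restore): that is rpc_cmd muting shell tracing while it talks to the SPDK RPC server and then restoring the saved xtrace state. In rough outline (a simplification: the real helper keeps a persistent rpc.py server coprocess rather than spawning a fresh process per call):

    # Approximate shape of the rpc_cmd bracket seen at @559/@587.
    rpc_cmd() {
        xtrace_disable                      # @559: silence tracing around the RPC
        "$rootdir/scripts/rpc.py" "$@"
        local rc=$?
        xtrace_restore                      # @587: restores and checks saved flags
        return $rc
    }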
21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.902 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.902 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.902 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.902 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.902 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.902 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.902 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.903 
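The DHHC-1 strings cycled through here follow the NVMe DH-HMAC-CHAP secret representation also used by nvme-cli: DHHC-1:<t>:<base64>:, where <t> encodes the transformation hash (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32 of the secret. A quick way to sanity-check a key taken from this trace (illustrative only, not part of the test):

    # 01-type secret from the log: expect 32 secret bytes + 4 CRC bytes.
    key='DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr:'
    b64=${key#DHHC-1:*:}; b64=${b64%:}
    echo -n "$b64" | base64 -d | wc -c    # prints 36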
21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.903 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.163 nvme0n1 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:12.163 21:21:02 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.163 21:21:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.435 nvme0n1 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:12.435 21:21:03 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.435 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.698 nvme0n1 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.698 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.956 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.215 nvme0n1 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.215 21:21:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.474 nvme0n1 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:13.474 
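The repeated nvmf/common.sh@741-755 block is the xtrace of get_main_ns_ip, which maps the active transport to the right address variable and dereferences it via bash indirect expansion; that is why the trace shows [[ -z rdma ]], [[ -z NVMF_FIRST_TARGET_IP ]], and then [[ -z 192.168.100.8 ]] in sequence. A reconstruction consistent with the trace (the exact early-return paths are inferred, not visible in the log):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # @744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP        # @745
        # @747: bail out if the transport or its candidate variable is unset
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}          # @748
        [[ -z ${!ip} ]] && return 1                   # @750: 192.168.100.8 here
        echo "${!ip}"                                 # @755
    }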
21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.474 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.041 nvme0n1 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.041 21:21:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.300 nvme0n1 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.300 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.868 nvme0n1 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.868 21:21:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.435 nvme0n1 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.435 21:21:06 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.435 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.694 nvme0n1 00:34:15.694 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.694 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.694 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.694 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.694 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.694 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.694 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.694 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.694 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.694 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.953 21:21:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.211 nvme0n1 00:34:16.211 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.211 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.211 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.211 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.212 21:21:07 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.212 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.470 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.729 nvme0n1 00:34:16.729 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.730 21:21:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.665 nvme0n1 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.665 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:17.666 21:21:08 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.666 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.231 nvme0n1 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.232 
21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.232 21:21:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.798 nvme0n1 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.798 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:18.799 
21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.799 21:21:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.735 nvme0n1 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.735 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.304 nvme0n1 00:34:20.304 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.304 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.304 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.304 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.304 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.304 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.304 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.304 21:21:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.304 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.304 21:21:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 
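
All of the secrets echoed through this trace use the NVMe-oF DH-HMAC-CHAP secret representation, DHHC-1:<hash>:<base64>:, where <hash> hints at the digest the secret was generated for (00 = unspecified, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. That layout comes from the NVMe authentication spec, not from the log itself, so treat this as a hedged sanity check rather than something the trace shows:

# Decode the keyid=0 secret from the trace and report its payload size.
# The secret||CRC-32 layout is an assumption from the DH-HMAC-CHAP
# secret representation; the log only ever prints the strings.
secret='DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9:'
b64=${secret#DHHC-1:*:}        # drop the DHHC-1:<hash>: prefix
b64=${b64%:}                   # drop the trailing colon
bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
echo "payload: ${bytes} bytes ($((bytes - 4))-byte secret + 4-byte CRC)"

For this key that prints a 36-byte payload, i.e. a 32-byte secret plus the checksum.
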
00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.304 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.305 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.619 nvme0n1 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.619 21:21:11 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
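
Each pass of this loop exercises one (digest, dhgroup, keyid) combination in two halves: nvmet_auth_set_key programs the kernel nvmet target with the key material, then connect_authenticate drives the SPDK host through set-options/attach/verify/detach. The xtrace output shows the bare echo commands but elides their redirections; on a stock kernel nvmet target they would most plausibly land in the configfs host attributes, roughly as sketched below (the configfs path and hostnqn are assumptions, not visible in this trace):

    # Hedged sketch of the target-side key programming. Attribute names are the
    # standard Linux nvmet configfs ones; the redirections themselves are not
    # shown in the xtrace above.
    HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$HOST/dhchap_hash"      # digest for this pass
    echo 'ffdhe2048'    > "$HOST/dhchap_dhgroup"   # DH group for this pass
    echo "$key"         > "$HOST/dhchap_key"       # host secret (DHHC-1:... string)
    [[ -z $ckey ]] || echo "$ckey" > "$HOST/dhchap_ctrl_key"  # controller secret, bidirectional cases only
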
00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.619 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.878 nvme0n1 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
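
The host-side half, driven through rpc_cmd in the trace above, reduces to four RPCs per iteration. A minimal standalone sketch, assuming a running SPDK application and that the names key0/ckey0 were registered with SPDK's keyring earlier in the script (e.g. via keyring_file_add_key):

    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

Restricting --dhchap-digests and --dhchap-dhgroups to a single value each forces the negotiation onto exactly the combination under test, which is why the loop re-issues bdev_nvme_set_options before every attach.
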
00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.878 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.138 nvme0n1 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.138 21:21:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.397 nvme0n1 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.397 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.398 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.657 nvme0n1 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.657 21:21:12 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.657 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.917 nvme0n1 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.917 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.918 21:21:12 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.918 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.178 nvme0n1 00:34:22.178 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.178 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.178 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.178 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.178 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.178 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.178 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.178 21:21:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.178 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.178 21:21:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.178 21:21:13 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.178 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.438 nvme0n1 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.438 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.696 nvme0n1 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.696 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.954 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.213 nvme0n1 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
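
keyid 4 is the unidirectional case: its ckey entry is empty, so the [[ -z '' ]] guard above skips the controller key and the resulting attach carries no --dhchap-ctrlr-key. The contrast, in the form the trace shows:

    # keys 0-3 (ckey set): bidirectional - host and controller authenticate each other
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # key 4 (ckey empty): unidirectional - only the host proves its identity
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
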
00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:23.213 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.214 21:21:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.472 nvme0n1 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
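
get_main_ns_ip, traced repeatedly above, simply maps the transport in use to the right environment variable and dereferences it. A hedged bash reconstruction of the selection logic visible in the trace (the transport variable name is an assumption; ip_candidates and the two candidate names are taken directly from the xtrace):

    declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
    ip=${ip_candidates[$TEST_TRANSPORT]}   # rdma -> NVMF_FIRST_TARGET_IP
    echo "${!ip}"                          # indirect expansion -> 192.168.100.8 here

Since this run uses rdma, every attach in the log targets NVMF_FIRST_TARGET_IP, i.e. 192.168.100.8.
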
00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.472 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.039 nvme0n1 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.039 
21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.039 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.040 21:21:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.299 nvme0n1 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 
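Each round in this trace follows the same shape: host/auth.sh keys the target for one digest/DH-group/key-index combination, restricts the host to that same pair, attaches the controller with the matching key material, confirms the controller came up, and detaches before the next combination. Condensed into the underlying RPC calls, the round in progress here (sha512 / ffdhe4096 / keyid 3) looks roughly like the sketch below — a minimal sketch, assuming the DHHC-1 secrets were registered earlier in the script as key0..key4 and ckey0..ckey3, and that rpc_cmd forwards to scripts/rpc.py as in SPDK's autotest_common.sh:

    # host side: permit exactly one digest and one DH group
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # attach with the matching key pair (--dhchap-ctrlr-key makes it bidirectional)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # verify the controller exists, then tear down for the next combination
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0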
00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.299 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.558 nvme0n1 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:24.558 21:21:15 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.558 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.126 nvme0n1 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.126 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.127 21:21:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.385 nvme0n1 00:34:25.385 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.385 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.385 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.385 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.386 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.386 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.645 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.904 nvme0n1 00:34:25.904 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.904 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.904 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.904 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.904 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.904 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.904 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.904 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.904 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.904 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.163 21:21:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.422 nvme0n1 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:34:26.422 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.423 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.991 nvme0n1 00:34:26.991 21:21:17 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.991 21:21:17 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.991 21:21:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.559 nvme0n1 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk1NGIzZWU3YzA3ZjdlZjI2NzUzZDhiNjNlMGUxNGLZmRT9: 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: ]] 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjQ4NDAzMmIwN2JjMWQyOGZiYzNjNjk0ZDFkZTQ3NjZhZmU4NzNlNWJmNGU2OGU2ZmNmNjk0ODc2ZjgxNWNjNrYIGW0=: 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.559 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.560 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.127 nvme0n1 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- 
# jq -r '.[].name' 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.127 21:21:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.063 nvme0n1 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2M5MjY4MjNhMDRlMDNlODc3MzdmZDlmYmVjYTVlOWM/zLwr: 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: ]] 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3MjBhMWE3MzYwZmJlZGE4NjA5YzQwYTlhNTNkODPdk9C8: 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.063 21:21:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.630 nvme0n1 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGE4MWIyNjI5MzQ1NTIzNTU1YmIzYzk1ZGUzMTBlM2Y2MDU4ODY0YTEyZjk3YTRiYIHCLg==: 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: ]] 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NThmMTk4MGI1YWI3Mzk2NmQzZjE4Y2ZiNzYyZGU4MTB8NlTh: 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:29.630 21:21:20 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.630 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.227 nvme0n1 00:34:30.227 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.227 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.227 21:21:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.227 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.227 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.227 21:21:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWZhMzYzYWE5MDcyOGY0NGNjYjY1OTI0MWNmMzk4ZDJjYzQ1YmQxMDY4MGZlZjA1ODIyMWJmY2VjNmU3MmE3ZeHjIxo=: 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.227 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.794 nvme0n1 00:34:30.794 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.794 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.794 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.794 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.794 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.794 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNhZDExOTU3M2YxM2E1ZTc5ZDMzOGE1MTg1ZTZlNzdhZjdmMmMyNWZlNzFhZmFl8EngIA==: 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: ]] 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjQ0YjRmOGNjMDQ5ZWQ5MzE5ODIyMTRiNWUyMjAzN2ZiMjg2OGJjNjJlYWU4N2Uzff70Ow==: 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:31.053 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
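The nvmet_auth_set_key trace a few lines up is the kernel-target half of the handshake setup: it pushes the digest, the DH group, and the DHHC-1 secret for keyid 1 into the target's view of host0. A minimal sketch of what those echo calls amount to, assuming the standard Linux nvmet configfs attribute names, which this log never prints:
# sketch only: destination paths assumed from the Linux nvmet configfs layout,
# not shown in this log; the echoed values are the ones traced above
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"
echo ffdhe2048 > "$host_dir/dhchap_dhgroup"
echo 'DHHC-1:00:NmNhZDEx...' > "$host_dir/dhchap_key"   # secret elided here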
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.054 request:
00:34:31.054 {
00:34:31.054 "name": "nvme0",
00:34:31.054 "trtype": "rdma",
00:34:31.054 "traddr": "192.168.100.8",
00:34:31.054 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:34:31.054 "adrfam": "ipv4",
00:34:31.054 "trsvcid": "4420",
00:34:31.054 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:34:31.054 "method": "bdev_nvme_attach_controller",
00:34:31.054 "req_id": 1
00:34:31.054 }
00:34:31.054 Got JSON-RPC error response
00:34:31.054 response:
00:34:31.054 {
00:34:31.054 "code": -5,
00:34:31.054 "message": "Input/output error"
00:34:31.054 }
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
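The request/response pair above is the expected-failure path: without a usable DH-HMAC-CHAP key the attach has to come back with code -5 (Input/output error), and the NOT wrapper turns that failure into a passing assertion. A simplified sketch of the pattern, leaving out the argument validation that the type -t trace performs:
# simplified stand-in for the NOT helper from autotest_common.sh:
# succeed exactly when the wrapped command fails
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2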
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:31.054 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.313 request:
00:34:31.313 {
00:34:31.313 "name": "nvme0",
00:34:31.313 "trtype": "rdma",
00:34:31.313 "traddr": "192.168.100.8",
00:34:31.313 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:34:31.313 "adrfam": "ipv4",
00:34:31.313 "trsvcid": "4420",
00:34:31.313 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:34:31.313 "dhchap_key": "key2",
00:34:31.313 "method": "bdev_nvme_attach_controller",
00:34:31.313 "req_id": 1
00:34:31.313 }
00:34:31.313 Got JSON-RPC error response
00:34:31.313 response:
00:34:31.313 {
00:34:31.313 "code": -5,
00:34:31.313 "message": "Input/output error"
00:34:31.313 }
00:34:31.313 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:34:31.313 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:34:31.313 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:34:31.313 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:34:31.313 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:34:31.313 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:34:31.313 21:21:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:34:31.313 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:31.313 21:21:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]]
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]]
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:31.313 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.314 request:
00:34:31.314 {
00:34:31.314 "name": "nvme0",
00:34:31.314 "trtype": "rdma",
00:34:31.314 "traddr": "192.168.100.8",
00:34:31.314 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:34:31.314 "adrfam": "ipv4",
00:34:31.314 "trsvcid": "4420",
00:34:31.314 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:34:31.314 "dhchap_key": "key1",
00:34:31.314 "dhchap_ctrlr_key": "ckey2",
00:34:31.314 "method": "bdev_nvme_attach_controller",
00:34:31.314 "req_id": 1
00:34:31.314 }
00:34:31.314 Got JSON-RPC error response
00:34:31.314 response:
00:34:31.314 {
00:34:31.314 "code": -5,
00:34:31.314 "message": "Input/output error"
00:34:31.314 }
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host --
nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:34:31.314 rmmod nvme_rdma 00:34:31.314 rmmod nvme_fabrics 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3738340 ']' 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3738340 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3738340 ']' 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3738340 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:31.314 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3738340 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3738340' 00:34:31.573 killing process with pid 3738340 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3738340 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3738340 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:31.573 21:21:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:34:31.832 21:21:22 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:34:35.123 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:35.123 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:37.029 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:34:37.029 21:21:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Ng5 /tmp/spdk.key-null.OCW /tmp/spdk.key-sha256.Jgg /tmp/spdk.key-sha384.xtb /tmp/spdk.key-sha512.Si6 /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:34:37.029 21:21:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:34:39.606 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:39.606 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:39.606 00:34:39.606 real 0m54.829s 00:34:39.606 user 0m48.090s 00:34:39.606 sys 0m14.374s 00:34:39.606 21:21:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:39.606 21:21:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.606 ************************************ 00:34:39.606 END TEST nvmf_auth_host 00:34:39.606 ************************************ 00:34:39.866 21:21:30 nvmf_rdma -- nvmf/nvmf.sh@107 -- # [[ rdma == \t\c\p ]] 00:34:39.866 21:21:30 nvmf_rdma -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:34:39.866 21:21:30 nvmf_rdma -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:34:39.866 21:21:30 nvmf_rdma -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:34:39.866 21:21:30 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test 
nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:34:39.866 21:21:30 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:39.866 21:21:30 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:39.866 21:21:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:39.866 ************************************ 00:34:39.866 START TEST nvmf_bdevperf 00:34:39.866 ************************************ 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:34:39.866 * Looking for test storage... 00:34:39.866 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:39.866 21:21:30 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:39.866 21:21:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:46.433 
21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:46.433 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:46.433 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:46.434 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:46.434 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:46.434 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:46.434 21:21:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf 
-- nvmf/common.sh@105 -- # continue 2 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:34:46.434 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:46.434 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:46.434 altname enp217s0f0np0 00:34:46.434 altname ens818f0np0 00:34:46.434 inet 192.168.100.8/24 scope global mlx_0_0 00:34:46.434 valid_lft forever preferred_lft forever 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:34:46.434 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:46.434 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:46.434 altname enp217s0f1np1 00:34:46.434 altname ens818f1np1 00:34:46.434 inet 192.168.100.9/24 scope global mlx_0_1 00:34:46.434 valid_lft forever preferred_lft forever 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 
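The get_ip_address helper traced above reduces to a single pipeline; the same extraction, standalone:
# print the first IPv4 address on an interface, prefix length stripped
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8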
00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:46.434 192.168.100.9' 00:34:46.434 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:46.434 192.168.100.9' 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 
-- # echo '192.168.100.8 00:34:46.435 192.168.100.9' 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3752790 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3752790 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3752790 ']' 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:46.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:46.435 21:21:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:46.435 [2024-07-13 21:21:37.253113] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:46.435 [2024-07-13 21:21:37.253168] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:46.435 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.694 [2024-07-13 21:21:37.326357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:46.694 [2024-07-13 21:21:37.364927] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:46.694 [2024-07-13 21:21:37.364973] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:46.694 [2024-07-13 21:21:37.364982] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:46.694 [2024-07-13 21:21:37.364990] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:46.694 [2024-07-13 21:21:37.364998] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:46.694 [2024-07-13 21:21:37.365104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:46.694 [2024-07-13 21:21:37.365186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:46.694 [2024-07-13 21:21:37.365187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.261 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:47.261 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:47.261 21:21:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:47.261 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.261 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:47.261 21:21:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.261 21:21:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:47.261 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.261 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:47.261 [2024-07-13 21:21:38.130895] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x81b420/0x81f910) succeed. 00:34:47.261 [2024-07-13 21:21:38.141108] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x81c9c0/0x860fa0) succeed. 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:47.520 Malloc0 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:47.520 [2024-07-13 21:21:38.294337] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening 
on 192.168.100.8 port 4420 *** 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:47.520 { 00:34:47.520 "params": { 00:34:47.520 "name": "Nvme$subsystem", 00:34:47.520 "trtype": "$TEST_TRANSPORT", 00:34:47.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:47.520 "adrfam": "ipv4", 00:34:47.520 "trsvcid": "$NVMF_PORT", 00:34:47.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:47.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:47.520 "hdgst": ${hdgst:-false}, 00:34:47.520 "ddgst": ${ddgst:-false} 00:34:47.520 }, 00:34:47.520 "method": "bdev_nvme_attach_controller" 00:34:47.520 } 00:34:47.520 EOF 00:34:47.520 )") 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:47.520 21:21:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:47.520 "params": { 00:34:47.520 "name": "Nvme1", 00:34:47.520 "trtype": "rdma", 00:34:47.520 "traddr": "192.168.100.8", 00:34:47.520 "adrfam": "ipv4", 00:34:47.520 "trsvcid": "4420", 00:34:47.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:47.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:47.520 "hdgst": false, 00:34:47.520 "ddgst": false 00:34:47.520 }, 00:34:47.520 "method": "bdev_nvme_attach_controller" 00:34:47.520 }' 00:34:47.520 [2024-07-13 21:21:38.344118] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:47.520 [2024-07-13 21:21:38.344164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753068 ] 00:34:47.520 EAL: No free 2048 kB hugepages reported on node 1 00:34:47.780 [2024-07-13 21:21:38.414957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.780 [2024-07-13 21:21:38.453604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.780 Running I/O for 1 seconds... 
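The bdevperf run above never writes a config file: gen_nvmf_target_json expands the heredoc into the resolved JSON printed in the trace, and process substitution hands it to bdevperf as /dev/fd/62. The shape of the call, paraphrased:
# equivalent form of the invocation traced above (path relative to the spdk tree)
./build/examples/bdevperf --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w verify -t 1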
00:34:49.157 00:34:49.157 Latency(us) 00:34:49.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.157 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:49.157 Verification LBA range: start 0x0 length 0x4000 00:34:49.157 Nvme1n1 : 1.00 18086.60 70.65 0.00 0.00 7039.11 2778.73 11744.05 00:34:49.157 =================================================================================================================== 00:34:49.157 Total : 18086.60 70.65 0.00 0.00 7039.11 2778.73 11744.05 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3753332 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:49.157 { 00:34:49.157 "params": { 00:34:49.157 "name": "Nvme$subsystem", 00:34:49.157 "trtype": "$TEST_TRANSPORT", 00:34:49.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:49.157 "adrfam": "ipv4", 00:34:49.157 "trsvcid": "$NVMF_PORT", 00:34:49.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:49.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:49.157 "hdgst": ${hdgst:-false}, 00:34:49.157 "ddgst": ${ddgst:-false} 00:34:49.157 }, 00:34:49.157 "method": "bdev_nvme_attach_controller" 00:34:49.157 } 00:34:49.157 EOF 00:34:49.157 )") 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:49.157 21:21:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:49.157 "params": { 00:34:49.157 "name": "Nvme1", 00:34:49.157 "trtype": "rdma", 00:34:49.157 "traddr": "192.168.100.8", 00:34:49.157 "adrfam": "ipv4", 00:34:49.157 "trsvcid": "4420", 00:34:49.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:49.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:49.157 "hdgst": false, 00:34:49.157 "ddgst": false 00:34:49.157 }, 00:34:49.157 "method": "bdev_nvme_attach_controller" 00:34:49.157 }' 00:34:49.157 [2024-07-13 21:21:39.873553] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:49.157 [2024-07-13 21:21:39.873605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753332 ] 00:34:49.157 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.157 [2024-07-13 21:21:39.946663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.157 [2024-07-13 21:21:39.981948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.417 Running I/O for 15 seconds... 
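[editor's note] The gen_nvmf_target_json heredoc traced twice above renders one bdev_nvme_attach_controller stanza per subsystem, and bdevperf reads the result over an anonymous file descriptor; that is where the --json /dev/fd/62 and /dev/fd/63 arguments come from. A hedged sketch with the variables resolved as in the trace; note the outer "subsystems"/"bdev" wrapper is an assumption about the final file shape, since only the inner stanza is printed above:

    json='{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }]
      }]
    }'

    # Process substitution is what produces the /dev/fd/NN paths seen above;
    # -f keeps bdevperf alive through I/O failures so it can ride out the
    # target restart that follows.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        --json <(printf '%s\n' "$json") -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!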
00:34:51.949 21:21:42 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3752790 00:34:51.949 21:21:42 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:53.328 [2024-07-13 21:21:43.860490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.328 [2024-07-13 21:21:43.860873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182f00 00:34:53.328 [2024-07-13 21:21:43.860883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.860894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.860914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.860925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.860934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.860945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.860954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.860964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.860976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.860987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.860996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 
m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861421] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:102 nsid:1 lba:125928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x182f00 00:34:53.329 [2024-07-13 21:21:43.861648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.329 [2024-07-13 21:21:43.861672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.329 [2024-07-13 21:21:43.861692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.329 [2024-07-13 21:21:43.861711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.329 [2024-07-13 21:21:43.861731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.329 [2024-07-13 21:21:43.861752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.329 [2024-07-13 21:21:43.861771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.329 [2024-07-13 21:21:43.861791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 
sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.329 [2024-07-13 21:21:43.861810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.329 [2024-07-13 21:21:43.861820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.329 [2024-07-13 21:21:43.861829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.861840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.861849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.861860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.861869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.861879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.861888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.861899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.861909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.861919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.861928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.861938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.861947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.861957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.861966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.861976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.861985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.861995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 
21:21:43.862619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.330 [2024-07-13 21:21:43.862698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.330 [2024-07-13 21:21:43.862709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.862987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.862996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126480 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:104 nsid:1 lba:126560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.863251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.331 [2024-07-13 21:21:43.863260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84748000 sqhd:52d0 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.865165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.331 [2024-07-13 21:21:43.865180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.331 [2024-07-13 21:21:43.865189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126584 len:8 PRP1 0x0 PRP2 0x0 00:34:53.331 [2024-07-13 21:21:43.865199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.331 [2024-07-13 21:21:43.865239] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:34:53.331 [2024-07-13 21:21:43.867945] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:53.331 [2024-07-13 21:21:43.882762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:53.331 [2024-07-13 21:21:43.885440] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:53.331 [2024-07-13 21:21:43.885461] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:53.331 [2024-07-13 21:21:43.885470] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:34:54.267 [2024-07-13 21:21:44.889708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:54.267 [2024-07-13 21:21:44.889767] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:54.267 [2024-07-13 21:21:44.890005] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:54.267 [2024-07-13 21:21:44.890026] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:54.267 [2024-07-13 21:21:44.890041] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:54.267 [2024-07-13 21:21:44.891816] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:54.267 [2024-07-13 21:21:44.893792] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
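[editor's note] The wall of ABORTED - SQ DELETION completions above is the expected fallout of the kill -9 at bdevperf.sh line 33: the LBAs run from 125568 to 126584 in len:8 steps, exactly the 128 commands the -q 128 queue held in flight when the target's send queues vanished. bdev_nvme then frees the qpair, resets the controller, and retries the RDMA connect, collecting RDMA_CM_EVENT_REJECTED until a listener exists again. The negative codes in those records are negated Linux errnos, which a pair of one-liners confirms:

    # Decode the two recurring error codes (bash calling python3 for the
    # errno table):
    python3 -c 'import errno,os; print(errno.errorcode[6], os.strerror(6))'
    # ENXIO No such device or address   <- "CQ transport error -6"
    python3 -c 'import errno,os; print(errno.errorcode[74], os.strerror(74))'
    # EBADMSG Bad message               <- "RDMA connect error -74"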
00:34:54.267 [2024-07-13 21:21:44.904628] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:54.267 [2024-07-13 21:21:44.907974] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:54.267 [2024-07-13 21:21:44.907994] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:54.267 [2024-07-13 21:21:44.908002] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:34:55.205 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3752790 Killed "${NVMF_APP[@]}" "$@" 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3754385 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3754385 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3754385 ']' 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:55.205 21:21:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:55.205 [2024-07-13 21:21:45.901145] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:55.205 [2024-07-13 21:21:45.901200] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.205 [2024-07-13 21:21:45.912058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:55.205 [2024-07-13 21:21:45.912085] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
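[editor's note] Once the shell reports the old target (3752790) killed, tgt_init brings up a replacement while the host keeps polling. A sketch of that restart step, assembled from the nvmfappstart trace above; only the rootdir variable is an assumption, the flags, helper, and socket are quoted from the log:

    # Relaunch the nvmf target app with the same flags as above and block
    # until its RPC socket (/var/tmp/spdk.sock) answers.
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # helper from test/common/autotest_common.sh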
00:34:55.205 [2024-07-13 21:21:45.912253] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:55.206 [2024-07-13 21:21:45.912265] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:55.206 [2024-07-13 21:21:45.912275] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:55.206 [2024-07-13 21:21:45.913971] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:55.206 [2024-07-13 21:21:45.914885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:55.206 [2024-07-13 21:21:45.926783] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:55.206 [2024-07-13 21:21:45.929440] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:55.206 [2024-07-13 21:21:45.929460] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:55.206 [2024-07-13 21:21:45.929468] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:34:55.206 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.206 [2024-07-13 21:21:45.973545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:55.206 [2024-07-13 21:21:46.012072] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.206 [2024-07-13 21:21:46.012121] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.206 [2024-07-13 21:21:46.012131] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.206 [2024-07-13 21:21:46.012139] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.206 [2024-07-13 21:21:46.012146] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:55.206 [2024-07-13 21:21:46.012192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:55.206 [2024-07-13 21:21:46.012280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:55.206 [2024-07-13 21:21:46.012282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:56.142 [2024-07-13 21:21:46.783584] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe33420/0xe37910) succeed. 00:34:56.142 [2024-07-13 21:21:46.793721] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe349c0/0xe78fa0) succeed. 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:56.142 Malloc0 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:56.142 [2024-07-13 21:21:46.933573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:56.142 [2024-07-13 21:21:46.933604] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:56.142 [2024-07-13 21:21:46.933778] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:56.142 [2024-07-13 21:21:46.933791] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:56.142 [2024-07-13 21:21:46.933803] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.142 [2024-07-13 21:21:46.934612] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:56.142 [2024-07-13 21:21:46.936530] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:56.142 [2024-07-13 21:21:46.937879] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.142 21:21:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3753332 00:34:56.142 [2024-07-13 21:21:46.947560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:56.142 [2024-07-13 21:21:46.991969] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
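For reference, the subsystem configuration interleaved through the rpc_cmd traces above boils down to the following RPC sequence; this is a sketch using SPDK's stock scripts/rpc.py client with the exact values from the log (the relative script path is an assumption about the working directory):

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420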
00:35:06.120
00:35:06.120                                                                  Latency(us)
00:35:06.120 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:06.120 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:06.120   Verification LBA range: start 0x0 length 0x4000
00:35:06.120   Nvme1n1             :      15.01   13096.14      51.16   10646.19       0.00    5371.60     445.64 1033476.51
00:35:06.120 ===================================================================================================================
00:35:06.120 Total               :              13096.14      51.16   10646.19       0.00    5371.60     445.64 1033476.51
00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:35:06.120 rmmod nvme_rdma 00:35:06.120 rmmod nvme_fabrics 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3754385 ']' 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3754385 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3754385 ']' 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3754385 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3754385 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3754385' 00:35:06.120 killing process with pid 3754385 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3754385 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 3754385 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso
']' 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:35:06.120 00:35:06.120 real 0m25.215s 00:35:06.120 user 1m4.135s 00:35:06.120 sys 0m6.243s 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:06.120 21:21:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:06.120 ************************************ 00:35:06.120 END TEST nvmf_bdevperf 00:35:06.120 ************************************ 00:35:06.120 21:21:55 nvmf_rdma -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:35:06.120 21:21:55 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:06.120 21:21:55 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:06.120 21:21:55 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:06.120 ************************************ 00:35:06.120 START TEST nvmf_target_disconnect 00:35:06.120 ************************************ 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:35:06.120 * Looking for test storage... 00:35:06.120 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.120 21:21:55 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.121 21:21:55 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:06.121 21:21:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:35:06.121 21:21:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:11.484 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:11.484 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:35:11.484 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:11.484 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:11.484 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:11.484 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:11.484 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:11.484 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:35:11.484 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:11.484 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:35:11.485 21:22:02 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:35:11.485 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:35:11.485 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:11.485 21:22:02 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:35:11.485 Found net devices under 0000:d9:00.0: mlx_0_0 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:35:11.485 Found net devices under 0000:d9:00.1: mlx_0_1 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:35:11.485 21:22:02 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:35:11.485 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:35:11.485     link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:35:11.485     altname enp217s0f0np0
00:35:11.485     altname ens818f0np0
00:35:11.485     inet 192.168.100.8/24 scope global mlx_0_0
00:35:11.485        valid_lft forever preferred_lft forever
00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:35:11.485 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:35:11.485     link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:35:11.485     altname enp217s0f1np1
00:35:11.485     altname ens818f1np1
00:35:11.485     inet 192.168.100.9/24 scope global mlx_0_1
00:35:11.485        valid_lft forever preferred_lft forever
00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:35:11.485 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:11.486 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:11.486 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in
"${rxe_net_devs[@]}" 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:35:11.745 192.168.100.9' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:35:11.745 192.168.100.9' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:35:11.745 192.168.100.9' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:11.745 21:22:02 
nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:11.745 ************************************ 00:35:11.745 START TEST nvmf_target_disconnect_tc1 00:35:11.745 ************************************ 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:11.745 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:35:11.746 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:11.746 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:35:11.746 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:35:11.746 21:22:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:11.746 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.746 [2024-07-13 21:22:02.614763] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:11.746 [2024-07-13 21:22:02.614808] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:11.746 [2024-07-13 21:22:02.614823] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:35:13.146 [2024-07-13 21:22:03.618624] 
nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:35:13.146 [2024-07-13 21:22:03.618648] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:35:13.146 [2024-07-13 21:22:03.618659] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:35:13.146 [2024-07-13 21:22:03.618681] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:13.146 [2024-07-13 21:22:03.618691] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:35:13.146 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:35:13.146 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:13.146 Initializing NVMe Controllers 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:13.146 00:35:13.146 real 0m1.131s 00:35:13.146 user 0m0.835s 00:35:13.146 sys 0m0.285s 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:13.146 ************************************ 00:35:13.146 END TEST nvmf_target_disconnect_tc1 00:35:13.146 ************************************ 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:13.146 ************************************ 00:35:13.146 START TEST nvmf_target_disconnect_tc2 00:35:13.146 ************************************ 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3759458 00:35:13.146 21:22:03 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3759458 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3759458 ']' 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:13.146 21:22:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:13.146 [2024-07-13 21:22:03.764496] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:13.146 [2024-07-13 21:22:03.764539] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.146 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.146 [2024-07-13 21:22:03.852021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:13.146 [2024-07-13 21:22:03.891874] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.146 [2024-07-13 21:22:03.891918] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:13.146 [2024-07-13 21:22:03.891927] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.146 [2024-07-13 21:22:03.891935] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.146 [2024-07-13 21:22:03.891942] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
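The waitforlisten 3759458 step traced above blocks until that pid is actually serving RPCs on /var/tmp/spdk.sock. A hedged, illustrative equivalent of what it waits for (not the framework's actual implementation) is a poll loop against the RPC socket:

  pid=3759458                   # nvmfpid from the trace above
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.1
  done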
00:35:13.146 [2024-07-13 21:22:03.892069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:35:13.146 [2024-07-13 21:22:03.892179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:35:13.146 [2024-07-13 21:22:03.892292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:35:13.146 [2024-07-13 21:22:03.892293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:35:13.714 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:13.714 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:35:13.714 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:13.715 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.715 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:13.715 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:13.715 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:13.715 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.715 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 Malloc0 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 [2024-07-13 21:22:04.638906] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2236e40/0x2243340) succeed. 00:35:13.974 [2024-07-13 21:22:04.649590] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2238480/0x22c3380) succeed. 
00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 [2024-07-13 21:22:04.786780] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3759520 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:13.974 21:22:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:13.974 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.508 21:22:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3759458 00:35:16.508 21:22:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:17.445 
Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Write completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 Read completed with error (sct=0, sc=8) 00:35:17.445 starting I/O failed 00:35:17.445 [2024-07-13 21:22:07.991279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:18.013 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3759458 Killed "${NVMF_APP[@]}" "$@" 00:35:18.013 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:35:18.013 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:18.014 21:22:08 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3760289 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3760289 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3760289 ']' 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:18.014 21:22:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:18.014 [2024-07-13 21:22:08.864034] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:18.014 [2024-07-13 21:22:08.864092] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.014 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.273 [2024-07-13 21:22:08.952104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:18.273 [2024-07-13 21:22:08.990047] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.273 [2024-07-13 21:22:08.990089] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:18.273 [2024-07-13 21:22:08.990099] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.273 [2024-07-13 21:22:08.990108] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.273 [2024-07-13 21:22:08.990116] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
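The runs of "completed with error (sct=0, sc=8)" around this point are queued I/Os completing after the target was killed out from under the connection: status code type 0 is the NVMe generic command status set, and, to our reading, generic status 0x08 is "Command Aborted due to SQ Deletion", the status SPDK reports when it fails requests on a qpair that has gone away. A tiny illustrative decode of the pair:

  sct=0 sc=8                    # values from the completions above
  case "${sct}:${sc}" in
      0:0) echo "generic status: successful completion" ;;
      0:8) echo "generic status: command aborted due to SQ deletion" ;;
      *)   echo "look up sct=${sct}, sc=${sc} in the NVMe base spec status tables" ;;
  esac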
00:35:18.273 [2024-07-13 21:22:08.990235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:35:18.273 [2024-07-13 21:22:08.990345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:35:18.273 [2024-07-13 21:22:08.990454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:35:18.273 [2024-07-13 21:22:08.990456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Write completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 Read completed with error (sct=0, sc=8) 00:35:18.273 starting I/O failed 00:35:18.273 [2024-07-13 21:22:08.996448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.841 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:18.841 21:22:09 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:35:18.841 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:18.841 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:18.841 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:18.841 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:18.841 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:18.841 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.841 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:19.100 Malloc0 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:19.100 [2024-07-13 21:22:09.760304] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2345e40/0x2352340) succeed. 00:35:19.100 [2024-07-13 21:22:09.771148] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2347480/0x23d2380) succeed. 
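Note: the rpc_cmd calls in this stretch assemble the whole target: a 64 MiB malloc bdev with 512-byte blocks and an RDMA transport with 1024 shared buffers (the two create_ib_device notices are its mlx5 ports registering); the cnode1 subsystem, its namespace, and the 192.168.100.8:4420 listener follow just below. Driven by hand with SPDK's scripts/rpc.py, the same sequence would look roughly like this (workspace path taken from this job):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420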
00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:19.100 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.101 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:19.101 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.101 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:19.101 [2024-07-13 21:22:09.908982] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:19.101 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.101 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:35:19.101 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.101 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:19.101 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.101 21:22:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3759520 00:35:19.360 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting 
I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Write completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 Read completed with error (sct=0, sc=8) 00:35:19.361 starting I/O failed 00:35:19.361 [2024-07-13 21:22:10.001669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 [2024-07-13 21:22:10.007301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.007365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.361 [2024-07-13 21:22:10.007387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.361 [2024-07-13 21:22:10.007398] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.361 [2024-07-13 21:22:10.007409] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.361 [2024-07-13 21:22:10.017399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 qpair failed and we were unable to recover it. 
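Note: each "completed with error" dump above lists 32 in-flight commands being failed back at once, every one with sct=0, sc=8. Read against the NVMe base specification (the log itself never expands these codes), that pair is the generic status "Command Aborted due to SQ Deletion", which is what outstanding I/O sees when its submission queue vanishes with the killed target. The CONNECT retries that follow instead complete with sct 1, sc 130 (0x82), which the NVMe-oF spec defines as Connect Invalid Parameters, matching the target-side "Unknown controller ID 0x1". A throwaway decoder for the two pairs seen in this log (a toy helper, not part of the test suite):

    decode_nvme_status() {
        case "$1/$2" in
            0/8)   echo "generic status: command aborted due to SQ deletion" ;;
            1/130) echo "fabrics CONNECT: invalid parameters (sc=0x82)" ;;
            *)     echo "sct=$1 sc=$2: see the NVMe base/fabrics specs" ;;
        esac
    }
    decode_nvme_status 0 8      # the aborted reads/writes above
    decode_nvme_status 1 130    # the failed CONNECT attempts below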
00:35:19.361 [2024-07-13 21:22:10.027118] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.027166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.361 [2024-07-13 21:22:10.027186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.361 [2024-07-13 21:22:10.027196] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.361 [2024-07-13 21:22:10.027207] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.361 [2024-07-13 21:22:10.037614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 qpair failed and we were unable to recover it. 00:35:19.361 [2024-07-13 21:22:10.047236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.047285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.361 [2024-07-13 21:22:10.047303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.361 [2024-07-13 21:22:10.047312] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.361 [2024-07-13 21:22:10.047322] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.361 [2024-07-13 21:22:10.057830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 qpair failed and we were unable to recover it. 00:35:19.361 [2024-07-13 21:22:10.067203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.067251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.361 [2024-07-13 21:22:10.067271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.361 [2024-07-13 21:22:10.067287] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.361 [2024-07-13 21:22:10.067298] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.361 [2024-07-13 21:22:10.077702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 qpair failed and we were unable to recover it. 
00:35:19.361 [2024-07-13 21:22:10.087368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.087411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.361 [2024-07-13 21:22:10.087429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.361 [2024-07-13 21:22:10.087438] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.361 [2024-07-13 21:22:10.087447] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.361 [2024-07-13 21:22:10.097973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 qpair failed and we were unable to recover it. 00:35:19.361 [2024-07-13 21:22:10.107429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.107470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.361 [2024-07-13 21:22:10.107488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.361 [2024-07-13 21:22:10.107498] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.361 [2024-07-13 21:22:10.107508] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.361 [2024-07-13 21:22:10.117950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 qpair failed and we were unable to recover it. 00:35:19.361 [2024-07-13 21:22:10.127474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.127519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.361 [2024-07-13 21:22:10.127536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.361 [2024-07-13 21:22:10.127546] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.361 [2024-07-13 21:22:10.127555] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.361 [2024-07-13 21:22:10.138024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 qpair failed and we were unable to recover it. 
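Note: the recurring "CQ transport error -6 (No such device or address)" is spdk_nvme_qpair_process_completions (named in the log line itself) reporting a dead RDMA completion queue. The -6 is -ENXIO, and the parenthesised text is simply its strerror string, as a one-liner confirms:

    python3 -c 'import errno, os; print(errno.ENXIO, os.strerror(errno.ENXIO))'
    # -> 6 No such device or address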
00:35:19.361 [2024-07-13 21:22:10.147400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.147444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.361 [2024-07-13 21:22:10.147461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.361 [2024-07-13 21:22:10.147470] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.361 [2024-07-13 21:22:10.147479] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.361 [2024-07-13 21:22:10.158049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 qpair failed and we were unable to recover it. 00:35:19.361 [2024-07-13 21:22:10.167573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.167611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.361 [2024-07-13 21:22:10.167628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.361 [2024-07-13 21:22:10.167637] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.361 [2024-07-13 21:22:10.167646] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.361 [2024-07-13 21:22:10.178056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 qpair failed and we were unable to recover it. 00:35:19.361 [2024-07-13 21:22:10.187686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.187729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.361 [2024-07-13 21:22:10.187747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.361 [2024-07-13 21:22:10.187757] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.361 [2024-07-13 21:22:10.187766] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.361 [2024-07-13 21:22:10.198135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.361 qpair failed and we were unable to recover it. 
00:35:19.361 [2024-07-13 21:22:10.207622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.361 [2024-07-13 21:22:10.207665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.362 [2024-07-13 21:22:10.207683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.362 [2024-07-13 21:22:10.207693] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.362 [2024-07-13 21:22:10.207701] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.362 [2024-07-13 21:22:10.218132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.362 qpair failed and we were unable to recover it. 00:35:19.362 [2024-07-13 21:22:10.227742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.362 [2024-07-13 21:22:10.227780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.362 [2024-07-13 21:22:10.227797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.362 [2024-07-13 21:22:10.227807] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.362 [2024-07-13 21:22:10.227815] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.362 [2024-07-13 21:22:10.238074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.362 qpair failed and we were unable to recover it. 00:35:19.362 [2024-07-13 21:22:10.247852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.362 [2024-07-13 21:22:10.247895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.362 [2024-07-13 21:22:10.247914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.362 [2024-07-13 21:22:10.247924] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.362 [2024-07-13 21:22:10.247933] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.621 [2024-07-13 21:22:10.258402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.621 qpair failed and we were unable to recover it. 
00:35:19.621 [2024-07-13 21:22:10.267729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.621 [2024-07-13 21:22:10.267765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.621 [2024-07-13 21:22:10.267781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.621 [2024-07-13 21:22:10.267791] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.621 [2024-07-13 21:22:10.267800] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.278316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 00:35:19.622 [2024-07-13 21:22:10.287941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.287980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.287997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.288007] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.288023] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.298365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 00:35:19.622 [2024-07-13 21:22:10.307795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.307833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.307850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.307864] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.307877] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.318539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 
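Note: the attempt timestamps give the reconnect cadence: a fresh CONNECT is issued roughly every 20 ms (10.027, 10.047, 10.067, 10.087, ...), and each qpair is declared unrecoverable about 10 ms after its attempt starts. A quick check of the gaps, with the first four attempt times abridged to milliseconds:

    printf '%s\n' 10.027 10.047 10.067 10.087 |
        awk 'NR > 1 { printf "gap: %.1f ms\n", ($1 - prev) * 1000 } { prev = $1 }'
    # -> gap: 20.0 ms, three times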
00:35:19.622 [2024-07-13 21:22:10.328069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.328108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.328125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.328134] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.328146] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.338479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 00:35:19.622 [2024-07-13 21:22:10.347875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.347916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.347932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.347942] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.347951] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.358625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 00:35:19.622 [2024-07-13 21:22:10.368180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.368219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.368236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.368245] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.368254] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.378637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 
00:35:19.622 [2024-07-13 21:22:10.387867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.387907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.387923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.387933] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.387942] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.398784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 00:35:19.622 [2024-07-13 21:22:10.408320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.408361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.408378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.408388] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.408396] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.418726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 00:35:19.622 [2024-07-13 21:22:10.428340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.428384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.428401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.428411] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.428419] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.438759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 
00:35:19.622 [2024-07-13 21:22:10.448436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.448472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.448489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.448498] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.448507] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.458985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 00:35:19.622 [2024-07-13 21:22:10.468218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.468260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.468276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.468285] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.468294] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.478747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 00:35:19.622 [2024-07-13 21:22:10.488481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.488520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.488536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.488545] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.488554] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.622 [2024-07-13 21:22:10.499086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.622 qpair failed and we were unable to recover it. 
00:35:19.622 [2024-07-13 21:22:10.508401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.622 [2024-07-13 21:22:10.508441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.622 [2024-07-13 21:22:10.508458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.622 [2024-07-13 21:22:10.508471] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.622 [2024-07-13 21:22:10.508480] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.882 [2024-07-13 21:22:10.518970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.882 qpair failed and we were unable to recover it. 00:35:19.882 [2024-07-13 21:22:10.528721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.882 [2024-07-13 21:22:10.528757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.882 [2024-07-13 21:22:10.528774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.882 [2024-07-13 21:22:10.528784] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.882 [2024-07-13 21:22:10.528792] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.882 [2024-07-13 21:22:10.538979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.882 qpair failed and we were unable to recover it. 00:35:19.882 [2024-07-13 21:22:10.548553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.882 [2024-07-13 21:22:10.548592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.882 [2024-07-13 21:22:10.548609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.882 [2024-07-13 21:22:10.548618] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.882 [2024-07-13 21:22:10.548627] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.882 [2024-07-13 21:22:10.559121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.882 qpair failed and we were unable to recover it. 
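Note: none of these retries can succeed on their own: the initiator keeps presenting controller ID 0x1, which only the killed target ever issued, so the fresh target rejects every CONNECT, and that is exactly the condition tc2 needs to hold while it runs. Condensed to its essentials (pids and paths are the ones recorded in this log; the real logic lives in test/nvmf/host/target_disconnect.sh, and this sketch is not the literal script):

    kill -9 3759458        # the bash "Killed" line earlier in the log
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    # then re-run the rpc.py setup sketched above; state for the old
    # controller ID 0x1 is gone, so every queued CONNECT now fails with
    # "Unknown controller ID" until the initiator gives up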
00:35:19.882 [2024-07-13 21:22:10.568706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.882 [2024-07-13 21:22:10.568748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.882 [2024-07-13 21:22:10.568764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.882 [2024-07-13 21:22:10.568774] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.882 [2024-07-13 21:22:10.568783] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.882 [2024-07-13 21:22:10.579036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.882 qpair failed and we were unable to recover it. 00:35:19.882 [2024-07-13 21:22:10.588692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.882 [2024-07-13 21:22:10.588737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.882 [2024-07-13 21:22:10.588753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.882 [2024-07-13 21:22:10.588763] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.882 [2024-07-13 21:22:10.588772] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.882 [2024-07-13 21:22:10.599206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.882 qpair failed and we were unable to recover it. 00:35:19.882 [2024-07-13 21:22:10.608821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.883 [2024-07-13 21:22:10.608860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.883 [2024-07-13 21:22:10.608877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.883 [2024-07-13 21:22:10.608887] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.883 [2024-07-13 21:22:10.608897] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.883 [2024-07-13 21:22:10.619349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.883 qpair failed and we were unable to recover it. 
00:35:19.883 [2024-07-13 21:22:10.628855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.883 [2024-07-13 21:22:10.628892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.883 [2024-07-13 21:22:10.628909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.883 [2024-07-13 21:22:10.628918] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.883 [2024-07-13 21:22:10.628927] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.883 [2024-07-13 21:22:10.639365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.883 qpair failed and we were unable to recover it. 00:35:19.883 [2024-07-13 21:22:10.648965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.883 [2024-07-13 21:22:10.649016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.883 [2024-07-13 21:22:10.649033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.883 [2024-07-13 21:22:10.649043] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.883 [2024-07-13 21:22:10.649052] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.883 [2024-07-13 21:22:10.659321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.883 qpair failed and we were unable to recover it. 00:35:19.883 [2024-07-13 21:22:10.668824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.883 [2024-07-13 21:22:10.668865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.883 [2024-07-13 21:22:10.668881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.883 [2024-07-13 21:22:10.668891] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.883 [2024-07-13 21:22:10.668899] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.883 [2024-07-13 21:22:10.679490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.883 qpair failed and we were unable to recover it. 
00:35:19.883 [2024-07-13 21:22:10.689143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.883 [2024-07-13 21:22:10.689180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.883 [2024-07-13 21:22:10.689203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.883 [2024-07-13 21:22:10.689212] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.883 [2024-07-13 21:22:10.689221] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.883 [2024-07-13 21:22:10.699550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.883 qpair failed and we were unable to recover it. 00:35:19.883 [2024-07-13 21:22:10.709322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.883 [2024-07-13 21:22:10.709362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.883 [2024-07-13 21:22:10.709379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.883 [2024-07-13 21:22:10.709389] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.883 [2024-07-13 21:22:10.709398] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.883 [2024-07-13 21:22:10.719593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.883 qpair failed and we were unable to recover it. 00:35:19.883 [2024-07-13 21:22:10.729001] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.883 [2024-07-13 21:22:10.729043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.883 [2024-07-13 21:22:10.729060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.883 [2024-07-13 21:22:10.729069] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.883 [2024-07-13 21:22:10.729078] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.883 [2024-07-13 21:22:10.739750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.883 qpair failed and we were unable to recover it. 
00:35:19.883 [2024-07-13 21:22:10.749381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.883 [2024-07-13 21:22:10.749421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.883 [2024-07-13 21:22:10.749438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.883 [2024-07-13 21:22:10.749448] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.883 [2024-07-13 21:22:10.749456] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:19.883 [2024-07-13 21:22:10.759719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:19.883 qpair failed and we were unable to recover it. 00:35:19.883 [2024-07-13 21:22:10.769328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.883 [2024-07-13 21:22:10.769367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.883 [2024-07-13 21:22:10.769384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.883 [2024-07-13 21:22:10.769393] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.883 [2024-07-13 21:22:10.769406] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:20.143 [2024-07-13 21:22:10.779636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.143 qpair failed and we were unable to recover it. 00:35:20.143 [2024-07-13 21:22:10.789276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.143 [2024-07-13 21:22:10.789317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.143 [2024-07-13 21:22:10.789333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.143 [2024-07-13 21:22:10.789342] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.143 [2024-07-13 21:22:10.789351] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:20.143 [2024-07-13 21:22:10.799760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.143 qpair failed and we were unable to recover it. 
00:35:20.143 [2024-07-13 21:22:10.809586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.143 [2024-07-13 21:22:10.809626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.143 [2024-07-13 21:22:10.809643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.143 [2024-07-13 21:22:10.809652] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.143 [2024-07-13 21:22:10.809661] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:20.143 [2024-07-13 21:22:10.819809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.143 qpair failed and we were unable to recover it. 00:35:20.143 [2024-07-13 21:22:10.829628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.143 [2024-07-13 21:22:10.829665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.143 [2024-07-13 21:22:10.829681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.143 [2024-07-13 21:22:10.829691] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.143 [2024-07-13 21:22:10.829699] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:20.143 [2024-07-13 21:22:10.839905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.143 qpair failed and we were unable to recover it. 00:35:20.143 [2024-07-13 21:22:10.849450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.143 [2024-07-13 21:22:10.849487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.143 [2024-07-13 21:22:10.849503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.143 [2024-07-13 21:22:10.849513] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.143 [2024-07-13 21:22:10.849522] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:20.143 [2024-07-13 21:22:10.859956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.143 qpair failed and we were unable to recover it. 
00:35:20.143 [2024-07-13 21:22:10.869410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.143 [2024-07-13 21:22:10.869448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.143 [2024-07-13 21:22:10.869465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.144 [2024-07-13 21:22:10.869474] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.144 [2024-07-13 21:22:10.869483] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:20.144 [2024-07-13 21:22:10.879991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.144 qpair failed and we were unable to recover it. 00:35:20.144 [2024-07-13 21:22:10.889589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.144 [2024-07-13 21:22:10.889627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.144 [2024-07-13 21:22:10.889643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.144 [2024-07-13 21:22:10.889652] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.144 [2024-07-13 21:22:10.889661] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:20.144 [2024-07-13 21:22:10.900056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.144 qpair failed and we were unable to recover it. 00:35:20.144 [2024-07-13 21:22:10.909607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.144 [2024-07-13 21:22:10.909647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.144 [2024-07-13 21:22:10.909663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.144 [2024-07-13 21:22:10.909673] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.144 [2024-07-13 21:22:10.909681] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:20.144 [2024-07-13 21:22:10.920143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:20.144 qpair failed and we were unable to recover it. 
00:35:20.144 [2024-07-13 21:22:10.929801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.144 [2024-07-13 21:22:10.929841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.144 [2024-07-13 21:22:10.929857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.144 [2024-07-13 21:22:10.929866] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.144 [2024-07-13 21:22:10.929875] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.144 [2024-07-13 21:22:10.940523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.144 qpair failed and we were unable to recover it.
00:35:20.144 [2024-07-13 21:22:10.949746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.144 [2024-07-13 21:22:10.949787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.144 [2024-07-13 21:22:10.949804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.144 [2024-07-13 21:22:10.949816] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.144 [2024-07-13 21:22:10.949826] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.144 [2024-07-13 21:22:10.960328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.144 qpair failed and we were unable to recover it.
00:35:20.144 [2024-07-13 21:22:10.969938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.144 [2024-07-13 21:22:10.969983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.144 [2024-07-13 21:22:10.970000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.144 [2024-07-13 21:22:10.970009] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.144 [2024-07-13 21:22:10.970025] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.144 [2024-07-13 21:22:10.980405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.144 qpair failed and we were unable to recover it.
00:35:20.144 [2024-07-13 21:22:10.989802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.144 [2024-07-13 21:22:10.989843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.144 [2024-07-13 21:22:10.989859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.144 [2024-07-13 21:22:10.989869] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.144 [2024-07-13 21:22:10.989877] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.144 [2024-07-13 21:22:11.000522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.144 qpair failed and we were unable to recover it.
00:35:20.144 [2024-07-13 21:22:11.010099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.144 [2024-07-13 21:22:11.010138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.144 [2024-07-13 21:22:11.010156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.144 [2024-07-13 21:22:11.010166] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.144 [2024-07-13 21:22:11.010175] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.144 [2024-07-13 21:22:11.020535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.144 qpair failed and we were unable to recover it.
00:35:20.144 [2024-07-13 21:22:11.029906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.144 [2024-07-13 21:22:11.029945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.144 [2024-07-13 21:22:11.029962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.144 [2024-07-13 21:22:11.029972] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.144 [2024-07-13 21:22:11.029981] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.404 [2024-07-13 21:22:11.040377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.404 qpair failed and we were unable to recover it.
00:35:20.404 [2024-07-13 21:22:11.050123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.404 [2024-07-13 21:22:11.050164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.404 [2024-07-13 21:22:11.050182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.404 [2024-07-13 21:22:11.050191] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.404 [2024-07-13 21:22:11.050200] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.404 [2024-07-13 21:22:11.060488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.404 qpair failed and we were unable to recover it.
00:35:20.404 [2024-07-13 21:22:11.070049] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.404 [2024-07-13 21:22:11.070093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.404 [2024-07-13 21:22:11.070110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.404 [2024-07-13 21:22:11.070120] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.404 [2024-07-13 21:22:11.070129] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.404 [2024-07-13 21:22:11.080488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.404 qpair failed and we were unable to recover it.
00:35:20.404 [2024-07-13 21:22:11.090053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.404 [2024-07-13 21:22:11.090093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.404 [2024-07-13 21:22:11.090110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.404 [2024-07-13 21:22:11.090120] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.404 [2024-07-13 21:22:11.090129] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.404 [2024-07-13 21:22:11.100515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.404 qpair failed and we were unable to recover it.
00:35:20.404 [2024-07-13 21:22:11.110313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.404 [2024-07-13 21:22:11.110351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.404 [2024-07-13 21:22:11.110368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.404 [2024-07-13 21:22:11.110378] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.404 [2024-07-13 21:22:11.110387] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.404 [2024-07-13 21:22:11.120480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.404 qpair failed and we were unable to recover it.
00:35:20.404 [2024-07-13 21:22:11.130311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.404 [2024-07-13 21:22:11.130351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.404 [2024-07-13 21:22:11.130371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.404 [2024-07-13 21:22:11.130380] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.404 [2024-07-13 21:22:11.130389] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.404 [2024-07-13 21:22:11.140611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.404 qpair failed and we were unable to recover it.
00:35:20.404 [2024-07-13 21:22:11.150334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.404 [2024-07-13 21:22:11.150368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.404 [2024-07-13 21:22:11.150385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.404 [2024-07-13 21:22:11.150395] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.404 [2024-07-13 21:22:11.150404] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.404 [2024-07-13 21:22:11.160781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.404 qpair failed and we were unable to recover it.
00:35:20.404 [2024-07-13 21:22:11.170465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.405 [2024-07-13 21:22:11.170504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.405 [2024-07-13 21:22:11.170521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.405 [2024-07-13 21:22:11.170530] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.405 [2024-07-13 21:22:11.170539] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.405 [2024-07-13 21:22:11.180880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.405 qpair failed and we were unable to recover it.
00:35:20.405 [2024-07-13 21:22:11.190470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.405 [2024-07-13 21:22:11.190510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.405 [2024-07-13 21:22:11.190526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.405 [2024-07-13 21:22:11.190536] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.405 [2024-07-13 21:22:11.190545] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.405 [2024-07-13 21:22:11.200917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.405 qpair failed and we were unable to recover it.
00:35:20.405 [2024-07-13 21:22:11.210581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.405 [2024-07-13 21:22:11.210623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.405 [2024-07-13 21:22:11.210640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.405 [2024-07-13 21:22:11.210649] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.405 [2024-07-13 21:22:11.210661] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.405 [2024-07-13 21:22:11.220996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.405 qpair failed and we were unable to recover it.
00:35:20.405 [2024-07-13 21:22:11.230556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.405 [2024-07-13 21:22:11.230601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.405 [2024-07-13 21:22:11.230617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.405 [2024-07-13 21:22:11.230627] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.405 [2024-07-13 21:22:11.230636] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.405 [2024-07-13 21:22:11.240860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.405 qpair failed and we were unable to recover it.
00:35:20.405 [2024-07-13 21:22:11.250629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.405 [2024-07-13 21:22:11.250669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.405 [2024-07-13 21:22:11.250686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.405 [2024-07-13 21:22:11.250696] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.405 [2024-07-13 21:22:11.250705] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.405 [2024-07-13 21:22:11.260970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.405 qpair failed and we were unable to recover it.
00:35:20.405 [2024-07-13 21:22:11.270790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.405 [2024-07-13 21:22:11.270832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.405 [2024-07-13 21:22:11.270848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.405 [2024-07-13 21:22:11.270858] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.405 [2024-07-13 21:22:11.270867] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.405 [2024-07-13 21:22:11.281227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.405 qpair failed and we were unable to recover it.
00:35:20.405 [2024-07-13 21:22:11.290659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.405 [2024-07-13 21:22:11.290701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.405 [2024-07-13 21:22:11.290719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.405 [2024-07-13 21:22:11.290730] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.405 [2024-07-13 21:22:11.290740] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.665 [2024-07-13 21:22:11.301142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.665 qpair failed and we were unable to recover it.
00:35:20.665 [2024-07-13 21:22:11.310874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.665 [2024-07-13 21:22:11.310912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.665 [2024-07-13 21:22:11.310930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.665 [2024-07-13 21:22:11.310939] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.665 [2024-07-13 21:22:11.310948] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.665 [2024-07-13 21:22:11.321181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.665 qpair failed and we were unable to recover it.
00:35:20.665 [2024-07-13 21:22:11.330803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.665 [2024-07-13 21:22:11.330842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.665 [2024-07-13 21:22:11.330860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.665 [2024-07-13 21:22:11.330869] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.665 [2024-07-13 21:22:11.330878] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.665 [2024-07-13 21:22:11.341381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.665 qpair failed and we were unable to recover it.
00:35:20.665 [2024-07-13 21:22:11.350953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.665 [2024-07-13 21:22:11.350993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.665 [2024-07-13 21:22:11.351015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.665 [2024-07-13 21:22:11.351025] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.665 [2024-07-13 21:22:11.351036] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.665 [2024-07-13 21:22:11.361386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.665 qpair failed and we were unable to recover it.
00:35:20.665 [2024-07-13 21:22:11.370935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.665 [2024-07-13 21:22:11.370980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.665 [2024-07-13 21:22:11.370997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.665 [2024-07-13 21:22:11.371006] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.665 [2024-07-13 21:22:11.371020] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.665 [2024-07-13 21:22:11.381568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.665 qpair failed and we were unable to recover it.
00:35:20.665 [2024-07-13 21:22:11.391141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.665 [2024-07-13 21:22:11.391180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.665 [2024-07-13 21:22:11.391197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.665 [2024-07-13 21:22:11.391210] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.665 [2024-07-13 21:22:11.391219] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.665 [2024-07-13 21:22:11.401433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.665 qpair failed and we were unable to recover it.
00:35:20.665 [2024-07-13 21:22:11.411115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.665 [2024-07-13 21:22:11.411155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.665 [2024-07-13 21:22:11.411172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.665 [2024-07-13 21:22:11.411181] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.665 [2024-07-13 21:22:11.411191] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.665 [2024-07-13 21:22:11.421588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.665 qpair failed and we were unable to recover it.
00:35:20.665 [2024-07-13 21:22:11.431124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.665 [2024-07-13 21:22:11.431161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.666 [2024-07-13 21:22:11.431178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.666 [2024-07-13 21:22:11.431187] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.666 [2024-07-13 21:22:11.431196] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.666 [2024-07-13 21:22:11.441642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.666 qpair failed and we were unable to recover it.
00:35:20.666 [2024-07-13 21:22:11.451134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.666 [2024-07-13 21:22:11.451181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.666 [2024-07-13 21:22:11.451198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.666 [2024-07-13 21:22:11.451208] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.666 [2024-07-13 21:22:11.451218] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.666 [2024-07-13 21:22:11.461614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.666 qpair failed and we were unable to recover it.
00:35:20.666 [2024-07-13 21:22:11.471211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.666 [2024-07-13 21:22:11.471253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.666 [2024-07-13 21:22:11.471270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.666 [2024-07-13 21:22:11.471280] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.666 [2024-07-13 21:22:11.471289] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.666 [2024-07-13 21:22:11.481596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.666 qpair failed and we were unable to recover it.
00:35:20.666 [2024-07-13 21:22:11.491291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.666 [2024-07-13 21:22:11.491328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.666 [2024-07-13 21:22:11.491344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.666 [2024-07-13 21:22:11.491354] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.666 [2024-07-13 21:22:11.491362] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.666 [2024-07-13 21:22:11.501784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.666 qpair failed and we were unable to recover it.
00:35:20.666 [2024-07-13 21:22:11.511318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.666 [2024-07-13 21:22:11.511357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.666 [2024-07-13 21:22:11.511374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.666 [2024-07-13 21:22:11.511383] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.666 [2024-07-13 21:22:11.511392] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.666 [2024-07-13 21:22:11.521797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.666 qpair failed and we were unable to recover it.
00:35:20.666 [2024-07-13 21:22:11.531424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.666 [2024-07-13 21:22:11.531465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.666 [2024-07-13 21:22:11.531481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.666 [2024-07-13 21:22:11.531491] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.666 [2024-07-13 21:22:11.531500] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.666 [2024-07-13 21:22:11.541933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.666 qpair failed and we were unable to recover it.
00:35:20.666 [2024-07-13 21:22:11.551431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.666 [2024-07-13 21:22:11.551467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.666 [2024-07-13 21:22:11.551484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.666 [2024-07-13 21:22:11.551494] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.666 [2024-07-13 21:22:11.551502] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.926 [2024-07-13 21:22:11.561937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.926 qpair failed and we were unable to recover it.
00:35:20.926 [2024-07-13 21:22:11.571500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.926 [2024-07-13 21:22:11.571542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.926 [2024-07-13 21:22:11.571561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.926 [2024-07-13 21:22:11.571571] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.926 [2024-07-13 21:22:11.571580] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.926 [2024-07-13 21:22:11.582473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.926 qpair failed and we were unable to recover it.
00:35:20.926 [2024-07-13 21:22:11.591596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.926 [2024-07-13 21:22:11.591635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.926 [2024-07-13 21:22:11.591651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.926 [2024-07-13 21:22:11.591661] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.926 [2024-07-13 21:22:11.591670] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.926 [2024-07-13 21:22:11.601833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.926 qpair failed and we were unable to recover it.
00:35:20.926 [2024-07-13 21:22:11.611603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.611648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.611665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.611674] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.611684] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.927 [2024-07-13 21:22:11.622249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.927 qpair failed and we were unable to recover it.
00:35:20.927 [2024-07-13 21:22:11.631652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.631692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.631709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.631719] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.631728] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.927 [2024-07-13 21:22:11.642143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.927 qpair failed and we were unable to recover it.
00:35:20.927 [2024-07-13 21:22:11.651685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.651723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.651739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.651749] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.651761] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.927 [2024-07-13 21:22:11.662056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.927 qpair failed and we were unable to recover it.
00:35:20.927 [2024-07-13 21:22:11.671692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.671730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.671747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.671756] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.671766] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.927 [2024-07-13 21:22:11.682191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.927 qpair failed and we were unable to recover it.
00:35:20.927 [2024-07-13 21:22:11.691900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.691941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.691957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.691967] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.691975] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.927 [2024-07-13 21:22:11.702512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.927 qpair failed and we were unable to recover it.
00:35:20.927 [2024-07-13 21:22:11.712051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.712092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.712109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.712119] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.712128] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.927 [2024-07-13 21:22:11.722345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.927 qpair failed and we were unable to recover it.
00:35:20.927 [2024-07-13 21:22:11.732016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.732063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.732080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.732089] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.732098] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.927 [2024-07-13 21:22:11.742541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.927 qpair failed and we were unable to recover it.
00:35:20.927 [2024-07-13 21:22:11.752036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.752077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.752093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.752103] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.752112] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.927 [2024-07-13 21:22:11.762452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.927 qpair failed and we were unable to recover it.
00:35:20.927 [2024-07-13 21:22:11.772157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.772203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.772220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.772229] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.772239] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.927 [2024-07-13 21:22:11.782549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.927 qpair failed and we were unable to recover it.
00:35:20.927 [2024-07-13 21:22:11.792206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.792245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.792262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.792271] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.792280] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:20.927 [2024-07-13 21:22:11.802596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:20.927 qpair failed and we were unable to recover it.
00:35:20.927 [2024-07-13 21:22:11.812303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:20.927 [2024-07-13 21:22:11.812346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:20.927 [2024-07-13 21:22:11.812364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:20.927 [2024-07-13 21:22:11.812373] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:20.927 [2024-07-13 21:22:11.812382] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.187 [2024-07-13 21:22:11.822880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.187 qpair failed and we were unable to recover it.
00:35:21.187 [2024-07-13 21:22:11.832331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.187 [2024-07-13 21:22:11.832370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.187 [2024-07-13 21:22:11.832387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.187 [2024-07-13 21:22:11.832399] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.187 [2024-07-13 21:22:11.832408] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.187 [2024-07-13 21:22:11.842624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.187 qpair failed and we were unable to recover it.
00:35:21.187 [2024-07-13 21:22:11.852519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.187 [2024-07-13 21:22:11.852560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.187 [2024-07-13 21:22:11.852576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.187 [2024-07-13 21:22:11.852586] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.187 [2024-07-13 21:22:11.852595] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.187 [2024-07-13 21:22:11.862841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.187 qpair failed and we were unable to recover it.
00:35:21.187 [2024-07-13 21:22:11.872409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.187 [2024-07-13 21:22:11.872453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.187 [2024-07-13 21:22:11.872470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.187 [2024-07-13 21:22:11.872480] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.187 [2024-07-13 21:22:11.872489] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.187 [2024-07-13 21:22:11.882748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.187 qpair failed and we were unable to recover it.
00:35:21.187 [2024-07-13 21:22:11.892527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.187 [2024-07-13 21:22:11.892570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.187 [2024-07-13 21:22:11.892586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.188 [2024-07-13 21:22:11.892596] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.188 [2024-07-13 21:22:11.892605] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.188 [2024-07-13 21:22:11.903032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.188 qpair failed and we were unable to recover it.
00:35:21.188 [2024-07-13 21:22:11.912540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.188 [2024-07-13 21:22:11.912580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.188 [2024-07-13 21:22:11.912597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.188 [2024-07-13 21:22:11.912607] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.188 [2024-07-13 21:22:11.912616] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.188 [2024-07-13 21:22:11.922984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.188 qpair failed and we were unable to recover it.
00:35:21.188 [2024-07-13 21:22:11.932695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.188 [2024-07-13 21:22:11.932735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.188 [2024-07-13 21:22:11.932752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.188 [2024-07-13 21:22:11.932762] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.188 [2024-07-13 21:22:11.932771] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.188 [2024-07-13 21:22:11.942960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.188 qpair failed and we were unable to recover it.
00:35:21.188 [2024-07-13 21:22:11.952732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.188 [2024-07-13 21:22:11.952773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.188 [2024-07-13 21:22:11.952790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.188 [2024-07-13 21:22:11.952800] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.188 [2024-07-13 21:22:11.952809] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.188 [2024-07-13 21:22:11.963228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.188 qpair failed and we were unable to recover it.
00:35:21.188 [2024-07-13 21:22:11.972906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.188 [2024-07-13 21:22:11.972941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.188 [2024-07-13 21:22:11.972958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.188 [2024-07-13 21:22:11.972967] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.188 [2024-07-13 21:22:11.972976] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.188 [2024-07-13 21:22:11.983404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.188 qpair failed and we were unable to recover it.
00:35:21.188 [2024-07-13 21:22:11.992845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.188 [2024-07-13 21:22:11.992884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.188 [2024-07-13 21:22:11.992900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.188 [2024-07-13 21:22:11.992909] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.188 [2024-07-13 21:22:11.992918] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.188 [2024-07-13 21:22:12.003240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.188 qpair failed and we were unable to recover it.
00:35:21.188 [2024-07-13 21:22:12.012869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.188 [2024-07-13 21:22:12.012910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.188 [2024-07-13 21:22:12.012932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.188 [2024-07-13 21:22:12.012941] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.188 [2024-07-13 21:22:12.012950] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.188 [2024-07-13 21:22:12.023420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.188 qpair failed and we were unable to recover it.
00:35:21.188 [2024-07-13 21:22:12.032893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.188 [2024-07-13 21:22:12.032930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.188 [2024-07-13 21:22:12.032947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.188 [2024-07-13 21:22:12.032957] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.188 [2024-07-13 21:22:12.032966] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.188 [2024-07-13 21:22:12.043479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.188 qpair failed and we were unable to recover it.
00:35:21.188 [2024-07-13 21:22:12.053032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:21.188 [2024-07-13 21:22:12.053075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:21.188 [2024-07-13 21:22:12.053092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:21.188 [2024-07-13 21:22:12.053102] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:21.188 [2024-07-13 21:22:12.053111] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:21.188 [2024-07-13 21:22:12.063593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:21.188 qpair failed and we were unable to recover it.
00:35:21.188 [2024-07-13 21:22:12.073003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.188 [2024-07-13 21:22:12.073051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.188 [2024-07-13 21:22:12.073067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.188 [2024-07-13 21:22:12.073077] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.188 [2024-07-13 21:22:12.073085] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.448 [2024-07-13 21:22:12.083509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.448 qpair failed and we were unable to recover it. 00:35:21.448 [2024-07-13 21:22:12.093111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.448 [2024-07-13 21:22:12.093154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.448 [2024-07-13 21:22:12.093170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.448 [2024-07-13 21:22:12.093180] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.448 [2024-07-13 21:22:12.093193] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.448 [2024-07-13 21:22:12.103618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.448 qpair failed and we were unable to recover it. 00:35:21.448 [2024-07-13 21:22:12.113108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.448 [2024-07-13 21:22:12.113143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.448 [2024-07-13 21:22:12.113160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.448 [2024-07-13 21:22:12.113169] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.448 [2024-07-13 21:22:12.113178] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.448 [2024-07-13 21:22:12.123559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.448 qpair failed and we were unable to recover it. 
00:35:21.448 [2024-07-13 21:22:12.133324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.448 [2024-07-13 21:22:12.133364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.448 [2024-07-13 21:22:12.133380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.448 [2024-07-13 21:22:12.133390] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.448 [2024-07-13 21:22:12.133399] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.448 [2024-07-13 21:22:12.143698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.448 qpair failed and we were unable to recover it. 00:35:21.448 [2024-07-13 21:22:12.153230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.448 [2024-07-13 21:22:12.153271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.448 [2024-07-13 21:22:12.153288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.448 [2024-07-13 21:22:12.153297] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.448 [2024-07-13 21:22:12.153306] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.448 [2024-07-13 21:22:12.163563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.448 qpair failed and we were unable to recover it. 00:35:21.448 [2024-07-13 21:22:12.173332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.448 [2024-07-13 21:22:12.173377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.448 [2024-07-13 21:22:12.173393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.448 [2024-07-13 21:22:12.173403] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.448 [2024-07-13 21:22:12.173411] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.448 [2024-07-13 21:22:12.183893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.448 qpair failed and we were unable to recover it. 
00:35:21.448 [2024-07-13 21:22:12.193326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.448 [2024-07-13 21:22:12.193365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.449 [2024-07-13 21:22:12.193382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.449 [2024-07-13 21:22:12.193392] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.449 [2024-07-13 21:22:12.193400] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.449 [2024-07-13 21:22:12.203801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.449 qpair failed and we were unable to recover it. 00:35:21.449 [2024-07-13 21:22:12.213572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.449 [2024-07-13 21:22:12.213613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.449 [2024-07-13 21:22:12.213631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.449 [2024-07-13 21:22:12.213641] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.449 [2024-07-13 21:22:12.213650] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.449 [2024-07-13 21:22:12.224285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.449 qpair failed and we were unable to recover it. 00:35:21.449 [2024-07-13 21:22:12.233469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.449 [2024-07-13 21:22:12.233509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.449 [2024-07-13 21:22:12.233526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.449 [2024-07-13 21:22:12.233536] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.449 [2024-07-13 21:22:12.233546] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.449 [2024-07-13 21:22:12.243866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.449 qpair failed and we were unable to recover it. 
00:35:21.449 [2024-07-13 21:22:12.253565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.449 [2024-07-13 21:22:12.253606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.449 [2024-07-13 21:22:12.253625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.449 [2024-07-13 21:22:12.253634] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.449 [2024-07-13 21:22:12.253643] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.449 [2024-07-13 21:22:12.264142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.449 qpair failed and we were unable to recover it. 00:35:21.449 [2024-07-13 21:22:12.273590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.449 [2024-07-13 21:22:12.273630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.449 [2024-07-13 21:22:12.273646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.449 [2024-07-13 21:22:12.273659] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.449 [2024-07-13 21:22:12.273668] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.449 [2024-07-13 21:22:12.284015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.449 qpair failed and we were unable to recover it. 00:35:21.449 [2024-07-13 21:22:12.293751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.449 [2024-07-13 21:22:12.293794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.449 [2024-07-13 21:22:12.293811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.449 [2024-07-13 21:22:12.293820] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.449 [2024-07-13 21:22:12.293830] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.449 [2024-07-13 21:22:12.304206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.449 qpair failed and we were unable to recover it. 
00:35:21.449 [2024-07-13 21:22:12.313764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.449 [2024-07-13 21:22:12.313805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.449 [2024-07-13 21:22:12.313822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.449 [2024-07-13 21:22:12.313832] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.449 [2024-07-13 21:22:12.313841] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.449 [2024-07-13 21:22:12.324066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.449 qpair failed and we were unable to recover it. 00:35:21.449 [2024-07-13 21:22:12.333825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.449 [2024-07-13 21:22:12.333862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.449 [2024-07-13 21:22:12.333879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.449 [2024-07-13 21:22:12.333889] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.449 [2024-07-13 21:22:12.333898] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.709 [2024-07-13 21:22:12.344263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.709 qpair failed and we were unable to recover it. 00:35:21.709 [2024-07-13 21:22:12.353807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.709 [2024-07-13 21:22:12.353843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.709 [2024-07-13 21:22:12.353860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.709 [2024-07-13 21:22:12.353870] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.709 [2024-07-13 21:22:12.353879] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.709 [2024-07-13 21:22:12.364318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.709 qpair failed and we were unable to recover it. 
00:35:21.710 [2024-07-13 21:22:12.373968] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.374002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.374027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.374037] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.374046] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.384476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 00:35:21.710 [2024-07-13 21:22:12.394060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.394100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.394116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.394126] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.394134] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.404471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 00:35:21.710 [2024-07-13 21:22:12.414098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.414141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.414158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.414167] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.414176] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.424579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 
00:35:21.710 [2024-07-13 21:22:12.434067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.434110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.434127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.434136] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.434146] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.444578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 00:35:21.710 [2024-07-13 21:22:12.454243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.454283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.454303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.454312] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.454321] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.464838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 00:35:21.710 [2024-07-13 21:22:12.474256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.474294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.474310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.474320] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.474328] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.484660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 
00:35:21.710 [2024-07-13 21:22:12.494340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.494378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.494394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.494404] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.494413] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.505041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 00:35:21.710 [2024-07-13 21:22:12.514419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.514463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.514480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.514490] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.514499] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.524869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 00:35:21.710 [2024-07-13 21:22:12.534465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.534500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.534516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.534526] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.534537] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.544859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 
00:35:21.710 [2024-07-13 21:22:12.554523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.554562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.554578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.554588] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.554597] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.564840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 00:35:21.710 [2024-07-13 21:22:12.574807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.574848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.574864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.574874] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.574883] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.710 [2024-07-13 21:22:12.584851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.710 qpair failed and we were unable to recover it. 00:35:21.710 [2024-07-13 21:22:12.594650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.710 [2024-07-13 21:22:12.594694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.710 [2024-07-13 21:22:12.594710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.710 [2024-07-13 21:22:12.594719] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.710 [2024-07-13 21:22:12.594729] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.970 [2024-07-13 21:22:12.605316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.970 qpair failed and we were unable to recover it. 
00:35:21.970 [2024-07-13 21:22:12.614739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.970 [2024-07-13 21:22:12.614772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.970 [2024-07-13 21:22:12.614789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.614799] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.614808] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.625150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 00:35:21.971 [2024-07-13 21:22:12.634700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.634741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.634757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.634767] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.634775] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.645277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 00:35:21.971 [2024-07-13 21:22:12.654927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.654964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.654981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.654990] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.654999] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.665307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 
00:35:21.971 [2024-07-13 21:22:12.674840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.674879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.674895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.674904] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.674913] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.685248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 00:35:21.971 [2024-07-13 21:22:12.694998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.695043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.695059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.695069] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.695078] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.705641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 00:35:21.971 [2024-07-13 21:22:12.715037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.715078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.715094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.715107] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.715116] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.725498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 
00:35:21.971 [2024-07-13 21:22:12.735107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.735147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.735164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.735173] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.735182] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.745454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 00:35:21.971 [2024-07-13 21:22:12.755158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.755201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.755217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.755227] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.755236] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.765563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 00:35:21.971 [2024-07-13 21:22:12.775277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.775311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.775327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.775337] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.775345] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.785814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 
00:35:21.971 [2024-07-13 21:22:12.795318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.795358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.795374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.795383] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.795392] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.805799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 00:35:21.971 [2024-07-13 21:22:12.815402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.815452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.815469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.815479] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.815488] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.825855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 00:35:21.971 [2024-07-13 21:22:12.835413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.835451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.835468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.835478] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.835486] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:21.971 [2024-07-13 21:22:12.845812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:21.971 qpair failed and we were unable to recover it. 
00:35:21.971 [2024-07-13 21:22:12.855524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.971 [2024-07-13 21:22:12.855557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.971 [2024-07-13 21:22:12.855573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.971 [2024-07-13 21:22:12.855582] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.971 [2024-07-13 21:22:12.855591] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.231 [2024-07-13 21:22:12.866423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.231 qpair failed and we were unable to recover it. 00:35:22.231 [2024-07-13 21:22:12.875561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.231 [2024-07-13 21:22:12.875600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.231 [2024-07-13 21:22:12.875617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.231 [2024-07-13 21:22:12.875626] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.231 [2024-07-13 21:22:12.875635] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.231 [2024-07-13 21:22:12.885918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.231 qpair failed and we were unable to recover it. 00:35:22.232 [2024-07-13 21:22:12.895583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:12.895625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:12.895644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:12.895653] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:12.895662] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:12.905981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 
00:35:22.232 [2024-07-13 21:22:12.915575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:12.915613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:12.915630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:12.915640] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:12.915649] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:12.925934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 00:35:22.232 [2024-07-13 21:22:12.935672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:12.935712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:12.935729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:12.935738] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:12.935747] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:12.946032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 00:35:22.232 [2024-07-13 21:22:12.955762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:12.955804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:12.955820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:12.955830] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:12.955839] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:12.966191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 
00:35:22.232 [2024-07-13 21:22:12.975788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:12.975830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:12.975846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:12.975855] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:12.975867] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:12.986172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 00:35:22.232 [2024-07-13 21:22:12.995847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:12.995884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:12.995900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:12.995910] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:12.995918] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:13.006126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 00:35:22.232 [2024-07-13 21:22:13.015902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:13.015944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:13.015962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:13.015972] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:13.015981] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:13.026448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 
00:35:22.232 [2024-07-13 21:22:13.035955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:13.035999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:13.036020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:13.036030] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:13.036040] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:13.046396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 00:35:22.232 [2024-07-13 21:22:13.056065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:13.056105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:13.056121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:13.056131] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:13.056140] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:13.066632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 00:35:22.232 [2024-07-13 21:22:13.076137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:13.076172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:13.076189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:13.076199] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:13.076208] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:13.086360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 
00:35:22.232 [2024-07-13 21:22:13.096174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:13.096210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:13.096227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:13.096236] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:13.096245] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.232 [2024-07-13 21:22:13.106682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.232 qpair failed and we were unable to recover it. 00:35:22.232 [2024-07-13 21:22:13.116359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.232 [2024-07-13 21:22:13.116400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.232 [2024-07-13 21:22:13.116417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.232 [2024-07-13 21:22:13.116427] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.232 [2024-07-13 21:22:13.116436] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.126641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 00:35:22.501 [2024-07-13 21:22:13.136337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.136379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.136396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.136406] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.136415] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.146669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 
00:35:22.501 [2024-07-13 21:22:13.156395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.156435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.156453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.156466] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.156475] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.166918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 00:35:22.501 [2024-07-13 21:22:13.176363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.176406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.176423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.176433] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.176441] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.186838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 00:35:22.501 [2024-07-13 21:22:13.196453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.196494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.196510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.196520] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.196528] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.206942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 
00:35:22.501 [2024-07-13 21:22:13.216540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.216582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.216599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.216609] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.216617] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.227051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 00:35:22.501 [2024-07-13 21:22:13.236475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.236519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.236535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.236544] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.236553] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.246967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 00:35:22.501 [2024-07-13 21:22:13.256556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.256599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.256616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.256625] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.256634] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.267064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 
00:35:22.501 [2024-07-13 21:22:13.276677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.276718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.276734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.276744] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.276753] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.287158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 00:35:22.501 [2024-07-13 21:22:13.296702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.296747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.296765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.296775] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.296785] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.307197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 00:35:22.501 [2024-07-13 21:22:13.316704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.316741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.316760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.316772] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.316792] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.327328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 
00:35:22.501 [2024-07-13 21:22:13.336757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.336798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.336821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.336831] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.336840] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.347323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 00:35:22.501 [2024-07-13 21:22:13.356740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.356780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.356797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.356807] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.356816] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.367343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 00:35:22.501 [2024-07-13 21:22:13.376915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.501 [2024-07-13 21:22:13.376957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.501 [2024-07-13 21:22:13.376974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.501 [2024-07-13 21:22:13.376984] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.501 [2024-07-13 21:22:13.376993] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.501 [2024-07-13 21:22:13.387461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.501 qpair failed and we were unable to recover it. 
00:35:22.762 [2024-07-13 21:22:13.396849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.396885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.396903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.396913] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.396922] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.407265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 00:35:22.762 [2024-07-13 21:22:13.417056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.417094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.417111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.417121] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.417134] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.427511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 00:35:22.762 [2024-07-13 21:22:13.437057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.437097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.437114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.437123] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.437133] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.447731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 
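Note: on the target side, the "Unknown controller ID 0x1" line comes from SPDK's internal I/O-qpair CONNECT handling (_nvmf_ctrlr_add_io_qpair in ctrlr.c). An I/O CONNECT must carry the cntlid that the earlier admin CONNECT returned, and a cntlid the subsystem does not recognize is refused with the invalid-parameters status seen on the host. The helper below is hypothetical, a paraphrase of that check rather than the actual SPDK internals:

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct subsystem;   /* hypothetical stand-ins for SPDK's internal types */
struct ctrlr;

/* Hypothetical lookup: returns NULL when no controller has this cntlid. */
extern struct ctrlr *subsystem_lookup_ctrlr(struct subsystem *subsys, uint16_t cntlid);

static bool io_connect_accept(struct subsystem *subsys, uint16_t cntlid)
{
	/* An I/O CONNECT naming a cntlid the subsystem does not know (here,
	 * 0x1) is refused; the host then logs "Connect command failed, rc -5". */
	return subsystem_lookup_ctrlr(subsys, cntlid) != NULL;
}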
00:35:22.762 [2024-07-13 21:22:13.457261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.457305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.457322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.457331] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.457341] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.467750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 00:35:22.762 [2024-07-13 21:22:13.477163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.477199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.477215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.477226] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.477235] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.487787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 00:35:22.762 [2024-07-13 21:22:13.497413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.497450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.497467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.497476] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.497485] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.508259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 
00:35:22.762 [2024-07-13 21:22:13.517368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.517406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.517423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.517433] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.517442] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.527822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 00:35:22.762 [2024-07-13 21:22:13.537393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.537441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.537457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.537467] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.537476] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.547814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 00:35:22.762 [2024-07-13 21:22:13.557515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.557556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.557572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.557582] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.557591] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.567845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 
00:35:22.762 [2024-07-13 21:22:13.577562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.577602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.577618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.577628] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.577637] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.588088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 00:35:22.762 [2024-07-13 21:22:13.597566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.597604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.597621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.597633] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.597642] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.607999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 00:35:22.762 [2024-07-13 21:22:13.617607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.617646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.617663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.617673] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.617681] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.628186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 
00:35:22.762 [2024-07-13 21:22:13.637583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:22.762 [2024-07-13 21:22:13.637626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:22.762 [2024-07-13 21:22:13.637642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:22.762 [2024-07-13 21:22:13.637652] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:22.762 [2024-07-13 21:22:13.637660] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:22.762 [2024-07-13 21:22:13.648026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:22.762 qpair failed and we were unable to recover it. 00:35:23.021 [2024-07-13 21:22:13.657791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.657831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.657848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.657857] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.657866] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.668273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 00:35:23.021 [2024-07-13 21:22:13.677768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.677810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.677827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.677836] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.677845] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.688093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 
00:35:23.021 [2024-07-13 21:22:13.697806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.697845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.697862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.697871] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.697880] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.708403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 00:35:23.021 [2024-07-13 21:22:13.717935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.717975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.717992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.718001] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.718010] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.728409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 00:35:23.021 [2024-07-13 21:22:13.738057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.738091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.738108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.738117] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.738126] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.748363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 
00:35:23.021 [2024-07-13 21:22:13.758067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.758107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.758124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.758134] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.758143] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.768481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 00:35:23.021 [2024-07-13 21:22:13.778151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.778195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.778214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.778224] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.778233] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.788690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 00:35:23.021 [2024-07-13 21:22:13.798140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.798184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.798201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.798210] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.798219] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.808685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 
00:35:23.021 [2024-07-13 21:22:13.818342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.818383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.818400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.818410] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.818419] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.828799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 00:35:23.021 [2024-07-13 21:22:13.838243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.838283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.838299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.838309] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.838318] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.848720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 00:35:23.021 [2024-07-13 21:22:13.858418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.858458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.858475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.858484] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.858496] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.868851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 
00:35:23.021 [2024-07-13 21:22:13.878423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.021 [2024-07-13 21:22:13.878465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.021 [2024-07-13 21:22:13.878481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.021 [2024-07-13 21:22:13.878490] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.021 [2024-07-13 21:22:13.878499] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.021 [2024-07-13 21:22:13.888517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.021 qpair failed and we were unable to recover it. 00:35:23.022 [2024-07-13 21:22:13.898411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.022 [2024-07-13 21:22:13.898448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.022 [2024-07-13 21:22:13.898465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.022 [2024-07-13 21:22:13.898475] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.022 [2024-07-13 21:22:13.898483] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.022 [2024-07-13 21:22:13.908983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.022 qpair failed and we were unable to recover it. 00:35:23.281 [2024-07-13 21:22:13.918532] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.281 [2024-07-13 21:22:13.918573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.281 [2024-07-13 21:22:13.918591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.281 [2024-07-13 21:22:13.918600] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.281 [2024-07-13 21:22:13.918609] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.281 [2024-07-13 21:22:13.928862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.281 qpair failed and we were unable to recover it. 
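Note: the "CQ transport error -6 (No such device or address)" lines come from the function the log itself names, spdk_nvme_qpair_process_completions(), which returns a negative errno once the transport is broken; -6 is -ENXIO. A minimal host-side poll loop showing that contract (a sketch, not the test's reactor code):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

static int poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means "no artificial limit". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* -ENXIO (-6) is the logged CQ transport error: the qpair is
		 * dead and must be recovered or torn down by the caller. */
		fprintf(stderr, "qpair poll failed: %d (%s)\n", rc, strerror(-rc));
		return rc;
	}
	return 0; /* rc >= 0 is the number of completions just processed */
}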
00:35:23.281 [2024-07-13 21:22:13.938600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.281 [2024-07-13 21:22:13.938644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.281 [2024-07-13 21:22:13.938661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.281 [2024-07-13 21:22:13.938670] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.281 [2024-07-13 21:22:13.938679] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.281 [2024-07-13 21:22:13.949185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.281 qpair failed and we were unable to recover it. 00:35:23.281 [2024-07-13 21:22:13.958622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.281 [2024-07-13 21:22:13.958662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.281 [2024-07-13 21:22:13.958679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.281 [2024-07-13 21:22:13.958688] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.281 [2024-07-13 21:22:13.958697] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.281 [2024-07-13 21:22:13.969032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.281 qpair failed and we were unable to recover it. 00:35:23.281 [2024-07-13 21:22:13.978747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.281 [2024-07-13 21:22:13.978785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.281 [2024-07-13 21:22:13.978802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.281 [2024-07-13 21:22:13.978812] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.281 [2024-07-13 21:22:13.978821] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.281 [2024-07-13 21:22:13.989378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.281 qpair failed and we were unable to recover it. 
00:35:23.281 [2024-07-13 21:22:13.998759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.281 [2024-07-13 21:22:13.998796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.281 [2024-07-13 21:22:13.998813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.281 [2024-07-13 21:22:13.998823] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.281 [2024-07-13 21:22:13.998831] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.281 [2024-07-13 21:22:14.009278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.281 qpair failed and we were unable to recover it. 00:35:23.281 [2024-07-13 21:22:14.018786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.281 [2024-07-13 21:22:14.018825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.281 [2024-07-13 21:22:14.018842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.281 [2024-07-13 21:22:14.018851] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.281 [2024-07-13 21:22:14.018860] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.281 [2024-07-13 21:22:14.029381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.281 qpair failed and we were unable to recover it. 00:35:23.281 [2024-07-13 21:22:14.038857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.281 [2024-07-13 21:22:14.038896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.281 [2024-07-13 21:22:14.038912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.281 [2024-07-13 21:22:14.038925] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.281 [2024-07-13 21:22:14.038934] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.281 [2024-07-13 21:22:14.049126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.281 qpair failed and we were unable to recover it. 
00:35:23.281 [2024-07-13 21:22:14.058971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.281 [2024-07-13 21:22:14.059021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.281 [2024-07-13 21:22:14.059038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.281 [2024-07-13 21:22:14.059048] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.281 [2024-07-13 21:22:14.059058] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.281 [2024-07-13 21:22:14.069595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.281 qpair failed and we were unable to recover it. 00:35:23.281 [2024-07-13 21:22:14.078964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.281 [2024-07-13 21:22:14.079006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.281 [2024-07-13 21:22:14.079029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.281 [2024-07-13 21:22:14.079039] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.281 [2024-07-13 21:22:14.079048] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.281 [2024-07-13 21:22:14.089388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.281 qpair failed and we were unable to recover it. 00:35:23.281 [2024-07-13 21:22:14.099050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.281 [2024-07-13 21:22:14.099094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.281 [2024-07-13 21:22:14.099110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.281 [2024-07-13 21:22:14.099119] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.281 [2024-07-13 21:22:14.099128] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.281 [2024-07-13 21:22:14.109595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.281 qpair failed and we were unable to recover it. 
00:35:23.281 [2024-07-13 21:22:14.119146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.282 [2024-07-13 21:22:14.119187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.282 [2024-07-13 21:22:14.119205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.282 [2024-07-13 21:22:14.119214] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.282 [2024-07-13 21:22:14.119224] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.282 [2024-07-13 21:22:14.129522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.282 qpair failed and we were unable to recover it. 00:35:23.282 [2024-07-13 21:22:14.139263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.282 [2024-07-13 21:22:14.139299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.282 [2024-07-13 21:22:14.139315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.282 [2024-07-13 21:22:14.139325] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.282 [2024-07-13 21:22:14.139334] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.282 [2024-07-13 21:22:14.149948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.282 qpair failed and we were unable to recover it. 00:35:23.282 [2024-07-13 21:22:14.159250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.282 [2024-07-13 21:22:14.159292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.282 [2024-07-13 21:22:14.159308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.282 [2024-07-13 21:22:14.159318] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.282 [2024-07-13 21:22:14.159326] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.282 [2024-07-13 21:22:14.169645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.282 qpair failed and we were unable to recover it. 
00:35:23.541 [2024-07-13 21:22:14.179376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.541 [2024-07-13 21:22:14.179414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.541 [2024-07-13 21:22:14.179431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.541 [2024-07-13 21:22:14.179440] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.541 [2024-07-13 21:22:14.179449] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.541 [2024-07-13 21:22:14.189824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.541 qpair failed and we were unable to recover it. 00:35:23.541 [2024-07-13 21:22:14.199404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.541 [2024-07-13 21:22:14.199443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.541 [2024-07-13 21:22:14.199460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.541 [2024-07-13 21:22:14.199469] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.541 [2024-07-13 21:22:14.199478] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.541 [2024-07-13 21:22:14.209712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.541 qpair failed and we were unable to recover it. 00:35:23.541 [2024-07-13 21:22:14.219489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.541 [2024-07-13 21:22:14.219525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.541 [2024-07-13 21:22:14.219545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.541 [2024-07-13 21:22:14.219555] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.541 [2024-07-13 21:22:14.219564] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.541 [2024-07-13 21:22:14.229822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.541 qpair failed and we were unable to recover it. 
00:35:23.541 [2024-07-13 21:22:14.239459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.542 [2024-07-13 21:22:14.239498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.542 [2024-07-13 21:22:14.239514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.542 [2024-07-13 21:22:14.239524] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.542 [2024-07-13 21:22:14.239533] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.542 [2024-07-13 21:22:14.249728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.542 qpair failed and we were unable to recover it. 00:35:23.542 [2024-07-13 21:22:14.259627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.542 [2024-07-13 21:22:14.259671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.542 [2024-07-13 21:22:14.259687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.542 [2024-07-13 21:22:14.259697] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.542 [2024-07-13 21:22:14.259705] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.542 [2024-07-13 21:22:14.270109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.542 qpair failed and we were unable to recover it. 00:35:23.542 [2024-07-13 21:22:14.279681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.542 [2024-07-13 21:22:14.279718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.542 [2024-07-13 21:22:14.279735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.542 [2024-07-13 21:22:14.279745] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.542 [2024-07-13 21:22:14.279754] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.542 [2024-07-13 21:22:14.290094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.542 qpair failed and we were unable to recover it. 
00:35:23.542 [2024-07-13 21:22:14.299650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.542 [2024-07-13 21:22:14.299690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.542 [2024-07-13 21:22:14.299706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.542 [2024-07-13 21:22:14.299716] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.542 [2024-07-13 21:22:14.299728] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.542 [2024-07-13 21:22:14.310257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.542 qpair failed and we were unable to recover it. 00:35:23.542 [2024-07-13 21:22:14.319709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.542 [2024-07-13 21:22:14.319748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.542 [2024-07-13 21:22:14.319765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.542 [2024-07-13 21:22:14.319775] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.542 [2024-07-13 21:22:14.319784] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.542 [2024-07-13 21:22:14.330008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.542 qpair failed and we were unable to recover it. 00:35:23.542 [2024-07-13 21:22:14.339866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.542 [2024-07-13 21:22:14.339913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.542 [2024-07-13 21:22:14.339929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.542 [2024-07-13 21:22:14.339939] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.542 [2024-07-13 21:22:14.339948] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.542 [2024-07-13 21:22:14.350213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.542 qpair failed and we were unable to recover it. 
00:35:23.542 [2024-07-13 21:22:14.359891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.542 [2024-07-13 21:22:14.359936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.542 [2024-07-13 21:22:14.359953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.542 [2024-07-13 21:22:14.359962] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.542 [2024-07-13 21:22:14.359972] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.542 [2024-07-13 21:22:14.370191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.542 qpair failed and we were unable to recover it. 00:35:23.542 [2024-07-13 21:22:14.379978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.542 [2024-07-13 21:22:14.380021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.542 [2024-07-13 21:22:14.380038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.542 [2024-07-13 21:22:14.380048] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.542 [2024-07-13 21:22:14.380057] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.542 [2024-07-13 21:22:14.390501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.542 qpair failed and we were unable to recover it. 00:35:23.542 [2024-07-13 21:22:14.400025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.542 [2024-07-13 21:22:14.400066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.542 [2024-07-13 21:22:14.400083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.542 [2024-07-13 21:22:14.400092] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.542 [2024-07-13 21:22:14.400101] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.542 [2024-07-13 21:22:14.410349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.542 qpair failed and we were unable to recover it. 
00:35:23.542 [2024-07-13 21:22:14.420049] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.542 [2024-07-13 21:22:14.420091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.542 [2024-07-13 21:22:14.420109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.542 [2024-07-13 21:22:14.420118] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.542 [2024-07-13 21:22:14.420127] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.542 [2024-07-13 21:22:14.430709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.542 qpair failed and we were unable to recover it. 00:35:23.802 [2024-07-13 21:22:14.440113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.440153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.440171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.440181] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.440190] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.450464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 00:35:23.802 [2024-07-13 21:22:14.460254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.460293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.460310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.460320] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.460328] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.470549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 
00:35:23.802 [2024-07-13 21:22:14.480276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.480315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.480332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.480345] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.480354] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.490647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 00:35:23.802 [2024-07-13 21:22:14.500369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.500414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.500431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.500441] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.500450] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.510764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 00:35:23.802 [2024-07-13 21:22:14.520321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.520357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.520375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.520385] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.520394] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.530519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 
00:35:23.802 [2024-07-13 21:22:14.540504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.540547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.540565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.540577] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.540588] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.550868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 00:35:23.802 [2024-07-13 21:22:14.560423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.560464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.560481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.560491] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.560500] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.570730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 00:35:23.802 [2024-07-13 21:22:14.580649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.580688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.580705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.580714] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.580723] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.591142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 
00:35:23.802 [2024-07-13 21:22:14.600622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.600664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.600682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.600691] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.600700] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.610902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 00:35:23.802 [2024-07-13 21:22:14.620617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.620653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.620671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.620680] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.620689] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.631201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 00:35:23.802 [2024-07-13 21:22:14.640622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.640663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.640681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.640690] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.640699] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.651028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 
00:35:23.802 [2024-07-13 21:22:14.660728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.802 [2024-07-13 21:22:14.660766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.802 [2024-07-13 21:22:14.660788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.802 [2024-07-13 21:22:14.660798] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.802 [2024-07-13 21:22:14.660807] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.802 [2024-07-13 21:22:14.671224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.802 qpair failed and we were unable to recover it. 00:35:23.802 [2024-07-13 21:22:14.680876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:23.803 [2024-07-13 21:22:14.680915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:23.803 [2024-07-13 21:22:14.680932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:23.803 [2024-07-13 21:22:14.680942] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:23.803 [2024-07-13 21:22:14.680951] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:23.803 [2024-07-13 21:22:14.691102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:23.803 qpair failed and we were unable to recover it. 00:35:24.062 [2024-07-13 21:22:14.700949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.062 [2024-07-13 21:22:14.700991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.062 [2024-07-13 21:22:14.701008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.062 [2024-07-13 21:22:14.701023] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.062 [2024-07-13 21:22:14.701032] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.062 [2024-07-13 21:22:14.711433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.062 qpair failed and we were unable to recover it. 
00:35:24.062 [2024-07-13 21:22:14.720876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.062 [2024-07-13 21:22:14.720918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.062 [2024-07-13 21:22:14.720936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.062 [2024-07-13 21:22:14.720945] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.062 [2024-07-13 21:22:14.720954] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.062 [2024-07-13 21:22:14.731245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.062 qpair failed and we were unable to recover it. 00:35:24.062 [2024-07-13 21:22:14.741087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.062 [2024-07-13 21:22:14.741133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.062 [2024-07-13 21:22:14.741150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.062 [2024-07-13 21:22:14.741159] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.062 [2024-07-13 21:22:14.741172] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.062 [2024-07-13 21:22:14.751437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.062 qpair failed and we were unable to recover it. 00:35:24.062 [2024-07-13 21:22:14.761058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.062 [2024-07-13 21:22:14.761099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.062 [2024-07-13 21:22:14.761115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.062 [2024-07-13 21:22:14.761124] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.062 [2024-07-13 21:22:14.761133] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.062 [2024-07-13 21:22:14.771501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.062 qpair failed and we were unable to recover it. 
00:35:24.062 [2024-07-13 21:22:14.781168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.062 [2024-07-13 21:22:14.781205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.062 [2024-07-13 21:22:14.781222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.062 [2024-07-13 21:22:14.781232] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.062 [2024-07-13 21:22:14.781241] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.063 [2024-07-13 21:22:14.791875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.063 qpair failed and we were unable to recover it. 00:35:24.063 [2024-07-13 21:22:14.801093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.063 [2024-07-13 21:22:14.801132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.063 [2024-07-13 21:22:14.801149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.063 [2024-07-13 21:22:14.801158] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.063 [2024-07-13 21:22:14.801167] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.063 [2024-07-13 21:22:14.811474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.063 qpair failed and we were unable to recover it. 00:35:24.063 [2024-07-13 21:22:14.821089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.063 [2024-07-13 21:22:14.821132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.063 [2024-07-13 21:22:14.821149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.063 [2024-07-13 21:22:14.821159] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.063 [2024-07-13 21:22:14.821167] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.063 [2024-07-13 21:22:14.831747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.063 qpair failed and we were unable to recover it. 
00:35:24.063 [2024-07-13 21:22:14.841201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.063 [2024-07-13 21:22:14.841241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.063 [2024-07-13 21:22:14.841259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.063 [2024-07-13 21:22:14.841268] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.063 [2024-07-13 21:22:14.841277] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.063 [2024-07-13 21:22:14.851515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.063 qpair failed and we were unable to recover it. 00:35:24.063 [2024-07-13 21:22:14.861343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.063 [2024-07-13 21:22:14.861380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.063 [2024-07-13 21:22:14.861398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.063 [2024-07-13 21:22:14.861407] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.063 [2024-07-13 21:22:14.861416] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.063 [2024-07-13 21:22:14.871903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.063 qpair failed and we were unable to recover it. 00:35:24.063 [2024-07-13 21:22:14.881315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.063 [2024-07-13 21:22:14.881353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.063 [2024-07-13 21:22:14.881370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.063 [2024-07-13 21:22:14.881379] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.063 [2024-07-13 21:22:14.881388] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.063 [2024-07-13 21:22:14.891712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.063 qpair failed and we were unable to recover it. 
00:35:24.063 [2024-07-13 21:22:14.901426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.063 [2024-07-13 21:22:14.901468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.063 [2024-07-13 21:22:14.901484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.063 [2024-07-13 21:22:14.901493] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.063 [2024-07-13 21:22:14.901502] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.063 [2024-07-13 21:22:14.911993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.063 qpair failed and we were unable to recover it. 00:35:24.063 [2024-07-13 21:22:14.921585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.063 [2024-07-13 21:22:14.921628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.063 [2024-07-13 21:22:14.921645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.063 [2024-07-13 21:22:14.921657] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.063 [2024-07-13 21:22:14.921666] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.063 [2024-07-13 21:22:14.931951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.063 qpair failed and we were unable to recover it. 00:35:24.063 [2024-07-13 21:22:14.941585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.063 [2024-07-13 21:22:14.941622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.063 [2024-07-13 21:22:14.941639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.063 [2024-07-13 21:22:14.941648] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.063 [2024-07-13 21:22:14.941657] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.063 [2024-07-13 21:22:14.952062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.063 qpair failed and we were unable to recover it. 
00:35:24.323 [2024-07-13 21:22:14.961657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.323 [2024-07-13 21:22:14.961699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.323 [2024-07-13 21:22:14.961716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.323 [2024-07-13 21:22:14.961725] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.323 [2024-07-13 21:22:14.961734] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.323 [2024-07-13 21:22:14.972201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.323 qpair failed and we were unable to recover it. 00:35:24.323 [2024-07-13 21:22:14.981702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.323 [2024-07-13 21:22:14.981745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.323 [2024-07-13 21:22:14.981762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.323 [2024-07-13 21:22:14.981771] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.323 [2024-07-13 21:22:14.981780] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.323 [2024-07-13 21:22:14.992201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.323 qpair failed and we were unable to recover it. 00:35:24.323 [2024-07-13 21:22:15.001681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.323 [2024-07-13 21:22:15.001721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.323 [2024-07-13 21:22:15.001738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.323 [2024-07-13 21:22:15.001748] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.323 [2024-07-13 21:22:15.001757] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.323 [2024-07-13 21:22:15.012142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.323 qpair failed and we were unable to recover it. 
00:35:24.323 [2024-07-13 21:22:15.021725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.323 [2024-07-13 21:22:15.021764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.323 [2024-07-13 21:22:15.021781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.323 [2024-07-13 21:22:15.021791] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.323 [2024-07-13 21:22:15.021800] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.323 [2024-07-13 21:22:15.032426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.323 qpair failed and we were unable to recover it. 00:35:24.323 [2024-07-13 21:22:15.041749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:24.323 [2024-07-13 21:22:15.041789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:24.323 [2024-07-13 21:22:15.041805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:24.323 [2024-07-13 21:22:15.041814] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:24.323 [2024-07-13 21:22:15.041823] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:24.323 [2024-07-13 21:22:15.052349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:24.323 qpair failed and we were unable to recover it. 
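Every retry in the block above fails with one signature: the target rejects the new I/O queue pair because it no longer recognizes the controller ("Unknown controller ID 0x1"), the CONNECT completes with sct 1, sc 130, and the host then reports CQ transport error -6 and abandons the qpair. When triaging a saved copy of this console output, a quick tally of that signature shows how long the storm lasted; a minimal shell sketch, with build.log as a placeholder name for the capture:
  # Count the rejected CONNECT attempts (sct 1, sc 130) in the capture.
  grep -o 'Connect command completed with error: sct 1, sc 130' build.log | wc -l
  # Group the CQ transport errors by qpair id to see which queues kept failing.
  grep -o 'on qpair id [0-9]*' build.log | sort | uniq -c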
00:35:25.261 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Write completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 Read completed with error (sct=0, sc=8)
00:35:25.262 starting I/O failed
00:35:25.262 [2024-07-13 21:22:16.057401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:25.262 [2024-07-13 21:22:16.064654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:25.262 [2024-07-13 21:22:16.064696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:25.262 [2024-07-13 21:22:16.064716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:25.262 [2024-07-13 21:22:16.064726] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:25.262 [2024-07-13 21:22:16.064735] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:35:25.262 [2024-07-13 21:22:16.075488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:25.262 qpair failed and we were unable to recover it.
00:35:25.262 [2024-07-13 21:22:16.085065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:25.262 [2024-07-13 21:22:16.085101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:25.262 [2024-07-13 21:22:16.085120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:25.262 [2024-07-13 21:22:16.085129] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:25.262 [2024-07-13 21:22:16.085138] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:35:25.262 [2024-07-13 21:22:16.095362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:25.262 qpair failed and we were unable to recover it.
00:35:25.262 [2024-07-13 21:22:16.105229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:25.262 [2024-07-13 21:22:16.105275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:25.262 [2024-07-13 21:22:16.105297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:25.262 [2024-07-13 21:22:16.105307] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:25.262 [2024-07-13 21:22:16.105317] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:25.262 [2024-07-13 21:22:16.115485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:25.262 qpair failed and we were unable to recover it.
00:35:25.262 [2024-07-13 21:22:16.125185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:25.262 [2024-07-13 21:22:16.125226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:25.262 [2024-07-13 21:22:16.125244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:25.262 [2024-07-13 21:22:16.125254] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:25.262 [2024-07-13 21:22:16.125262] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:25.262 [2024-07-13 21:22:16.135659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:25.262 qpair failed and we were unable to recover it.
00:35:25.262 [2024-07-13 21:22:16.135786] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:35:25.262 A controller has encountered a failure and is being reset.
00:35:25.262 [2024-07-13 21:22:16.145347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:25.262 [2024-07-13 21:22:16.145396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:25.262 [2024-07-13 21:22:16.145428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:25.262 [2024-07-13 21:22:16.145443] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:25.262 [2024-07-13 21:22:16.145456] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:35:25.557 [2024-07-13 21:22:16.155650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:25.557 qpair failed and we were unable to recover it.
00:35:25.557 [2024-07-13 21:22:16.165276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:25.557 [2024-07-13 21:22:16.165321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:25.557 [2024-07-13 21:22:16.165339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:25.557 [2024-07-13 21:22:16.165349] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:25.557 [2024-07-13 21:22:16.165358] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:35:25.557 [2024-07-13 21:22:16.175805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:25.557 qpair failed and we were unable to recover it.
00:35:25.557 [2024-07-13 21:22:16.175932] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:35:25.557 [2024-07-13 21:22:16.209424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:35:25.557 Controller properly reset.
00:35:25.557 Initializing NVMe Controllers
00:35:25.557 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:35:25.557 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:35:25.557 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:35:25.557 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:35:25.557 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:35:25.557 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:35:25.557 Initialization complete. Launching workers.
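This is the recovery path the test is exercising: the failed keep-alive marks the controller for reset, the unexpected RDMA_CM_EVENT_TIMEWAIT_EXIT seen during teardown is reported but tolerated, and the host confirms with "Controller properly reset." before reattaching and spreading the queues back over lcores 0-3. A quick check that a capture reached this state, again as a small shell sketch against a placeholder build.log:
  # The three lifecycle markers of a successful reset, in order of appearance.
  grep -o 'Submitting Keep Alive failed' build.log | wc -l
  grep -o 'Controller properly reset' build.log | wc -l
  grep -o 'Attached to NVMe over Fabrics controller' build.log | wc -l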
00:35:25.557 Starting thread on core 1
00:35:25.557 Starting thread on core 2
00:35:25.557 Starting thread on core 3
00:35:25.557 Starting thread on core 0
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:35:25.557
00:35:25.557 real 0m12.550s
00:35:25.557 user 0m27.080s
00:35:25.557 sys 0m3.132s
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:25.557 ************************************
00:35:25.557 END TEST nvmf_target_disconnect_tc2
00:35:25.557 ************************************
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']'
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:35:25.557 ************************************
00:35:25.557 START TEST nvmf_target_disconnect_tc3
00:35:25.557 ************************************
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc3
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3761445
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2
00:35:25.557 21:22:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
00:35:25.557 EAL: No free 2048 kB hugepages reported on node 1
00:35:27.497 21:22:18 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3760289
00:35:27.497 21:22:18 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2
00:35:28.876 Write completed with error (sct=0, sc=8)
00:35:28.876 starting I/O failed
00:35:28.876 Read completed with error (sct=0, sc=8)
00:35:28.876 starting I/O failed
00:35:28.876 Read completed with error (sct=0, sc=8)
00:35:28.876 starting I/O failed
00:35:28.876 Read completed with error (sct=0, sc=8)
00:35:28.876 starting I/O failed
00:35:28.876 Write completed with error (sct=0, sc=8)
00:35:28.876 starting I/O failed
00:35:28.876 Read completed with error (sct=0, sc=8)
00:35:28.876 starting I/O failed
00:35:28.876 Read completed with error (sct=0, sc=8)
00:35:28.876 starting I/O failed
00:35:28.876 Read completed with error (sct=0, sc=8)
00:35:28.876 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Write completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 Read completed with error (sct=0, sc=8)
00:35:28.877 starting I/O failed
00:35:28.877 [2024-07-13 21:22:19.521999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:29.815 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3760289 Killed "${NVMF_APP[@]}" "$@"
00:35:29.815 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9
00:35:29.815 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:29.815 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:35:29.815 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable
00:35:29.815 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:35:29.815 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3762192
00:35:29.816 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3762192
00:35:29.816 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:29.816 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3762192 ']'
00:35:29.816 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
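For reference, the tc3 sequence recorded above does three things: it starts SPDK's bundled reconnect example against the primary listener while handing it the alternate address to fail over to, kills the original target (pid 3760289), and launches a fresh nvmf_tgt that will be configured on 192.168.100.9. A manual re-run of the host side would use the same invocation; only the SPDK checkout path below is a placeholder:
  # Drive 32-deep 4096-byte random mixed I/O for 10 s on cores 0-3, with
  # 192.168.100.9 declared as the address to reconnect to after a failure.
  SPDK_DIR=/path/to/spdk   # placeholder for a built SPDK tree
  "$SPDK_DIR/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'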
00:35:29.816 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:29.816 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.816 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:29.816 21:22:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:29.816 [2024-07-13 21:22:20.406715] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:29.816 [2024-07-13 21:22:20.406765] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:29.816 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.816 [2024-07-13 21:22:20.496034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error 
(sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Read completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 Write completed with error (sct=0, sc=8) 00:35:29.816 starting I/O failed 00:35:29.816 [2024-07-13 21:22:20.527223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:29.816 [2024-07-13 21:22:20.534802] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:29.816 [2024-07-13 21:22:20.534836] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:29.816 [2024-07-13 21:22:20.534846] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:29.816 [2024-07-13 21:22:20.534855] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:29.816 [2024-07-13 21:22:20.534862] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:29.816 [2024-07-13 21:22:20.535003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:35:29.816 [2024-07-13 21:22:20.535113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:35:29.816 [2024-07-13 21:22:20.535223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:35:29.816 [2024-07-13 21:22:20.535224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # return 0 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:30.385 Malloc0 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:35:30.385 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.385 21:22:21 
00:35:30.644 [2024-07-13 21:22:21.301390] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1925e40/0x1932340) succeed.
00:35:30.644 [2024-07-13 21:22:21.312060] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1927480/0x19b2380) succeed.
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:35:30.645 [2024-07-13 21:22:21.449517] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:30.645 21:22:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3761445
00:35:30.645 Write completed with error (sct=0, sc=8)
00:35:30.645 starting I/O failed
00:35:30.645 [... 31 further "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs omitted here; 32 outstanding I/Os failed in total ...]
00:35:30.645 [2024-07-13 21:22:21.532217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:30.645 [2024-07-13 21:22:21.533758] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:35:30.645 [2024-07-13 21:22:21.533779] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:35:30.645 [2024-07-13 21:22:21.533788] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:32.024 [2024-07-13 21:22:22.537785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:32.024 qpair failed and we were unable to recover it.
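For readers reconstructing the tc3 scenario by hand: the target-side setup that the xtrace above records reduces to the RPC sequence below. This is a minimal sketch using SPDK's stock scripts/rpc.py against the default /var/tmp/spdk.sock socket; the bdev name, NQN, serial, address, and port are taken verbatim from the log, while the loop-free one-shot form is an assumption.

    # Recreate the tc3 target configuration by hand (sketch; run from the SPDK source tree).
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc-backed bdev, 512-byte blocks
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024  # RDMA transport, as traced above
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420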
00:35:32.024 [2024-07-13 21:22:22.539335] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:35:32.024 [2024-07-13 21:22:22.539353] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:35:32.024 [2024-07-13 21:22:22.539361] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:35:32.961 [2024-07-13 21:22:23.543165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:32.961 qpair failed and we were unable to recover it.
[... the identical reject / RDMA connect error -74 / failed-to-connect rqpair=0x2000003d3000 / CQ-transport-error-on-qpair-3 cycle recurred once per second for five further attempts (21:22:23 through 21:22:27, serial stamps 00:35:32.961 through 00:35:38.086), each ending "qpair failed and we were unable to recover it." ...]
00:35:38.086 [2024-07-13 21:22:28.571526] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:35:38.086 [2024-07-13 21:22:28.571550] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:35:38.086 [2024-07-13 21:22:28.571558] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:39.023 [2024-07-13 21:22:29.575393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:39.023 qpair failed and we were unable to recover it.
00:35:39.023 [2024-07-13 21:22:29.576836] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:35:39.023 [2024-07-13 21:22:29.576856] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:35:39.023 [2024-07-13 21:22:29.576865] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:39.960 [2024-07-13 21:22:30.580919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:39.960 qpair failed and we were unable to recover it.
00:35:39.960 [2024-07-13 21:22:30.581046] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:35:39.960 A controller has encountered a failure and is being reset.
00:35:39.960 Resorting to new failover address 192.168.100.9
00:35:39.960 [2024-07-13 21:22:30.582649] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:35:39.960 [2024-07-13 21:22:30.582679] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:35:39.960 [2024-07-13 21:22:30.582691] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:35:40.893 [2024-07-13 21:22:31.586501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:40.893 qpair failed and we were unable to recover it.
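The pattern above is worth summarizing: each reconnect attempt is rejected by the now-stopped listener (RDMA_CM_EVENT_REJECTED, surfaced as connect error -74), the qpair is torn down with a CQ transport error, and only after the keep-alive finally fails does the host reset the controller and retry against the failover address 192.168.100.9. When triaging a saved copy of such a log, a rough count of attempts and of the rqpairs involved can be pulled out with standard tools (a sketch; "build.log" is a stand-in for wherever the console output was captured):

    grep -c 'RDMA connect error -74' build.log                                 # number of rejected connect attempts
    grep -o 'Failed to connect rqpair=0x[0-9a-f]*' build.log | sort | uniq -c  # attempts per rqpair address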
00:35:40.893 [2024-07-13 21:22:31.588028] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:35:40.893 [2024-07-13 21:22:31.588048] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:35:40.893 [2024-07-13 21:22:31.588056] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:35:41.828 [2024-07-13 21:22:32.592003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:41.829 qpair failed and we were unable to recover it.
00:35:42.764 Write completed with error (sct=0, sc=8)
00:35:42.764 starting I/O failed
00:35:42.764 [... 31 further "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs omitted here; 32 outstanding I/Os failed in total ...]
00:35:42.764 [2024-07-13 21:22:33.597104] nvme_qpair.c:
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:42.764 [2024-07-13 21:22:33.597125] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.764 [2024-07-13 21:22:33.597236] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:42.764 [2024-07-13 21:22:33.627737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:35:42.764 Controller properly reset. 00:35:43.023 Initializing NVMe Controllers 00:35:43.023 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:43.023 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:43.023 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:43.023 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:43.023 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:43.023 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:43.023 Initialization complete. Launching workers. 00:35:43.023 Starting thread on core 1 00:35:43.023 Starting thread on core 2 00:35:43.023 Starting thread on core 3 00:35:43.023 Starting thread on core 0 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:35:43.023 00:35:43.023 real 0m17.354s 00:35:43.023 user 0m59.862s 00:35:43.023 sys 0m5.462s 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:43.023 ************************************ 00:35:43.023 END TEST nvmf_target_disconnect_tc3 00:35:43.023 ************************************ 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:35:43.023 rmmod nvme_rdma 00:35:43.023 rmmod nvme_fabrics 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3762192 ']' 00:35:43.023 21:22:33 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3762192
00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3762192 ']'
00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3762192
00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname
00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3762192
00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4
00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']'
00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3762192'
00:35:43.023 killing process with pid 3762192
00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3762192
00:35:43.023 21:22:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3762192
00:35:43.282 21:22:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:35:43.282 21:22:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:35:43.282
00:35:43.282 real 0m38.251s
00:35:43.282 user 2m23.493s
00:35:43.282 sys 0m14.269s
00:35:43.282 21:22:34 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable
00:35:43.282 21:22:34 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:35:43.282 ************************************
00:35:43.282 END TEST nvmf_target_disconnect
00:35:43.282 ************************************
00:35:43.282 21:22:34 nvmf_rdma -- nvmf/nvmf.sh@126 -- # timing_exit host
00:35:43.282 21:22:34 nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:43.282 21:22:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:35:43.543 21:22:34 nvmf_rdma -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:35:43.543
00:35:43.543 real 27m50.178s
00:35:43.543 user 81m33.501s
00:35:43.543 sys 6m13.927s
00:35:43.543 21:22:34 nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable
00:35:43.543 21:22:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:35:43.543 ************************************
00:35:43.543 END TEST nvmf_rdma
00:35:43.543 ************************************
00:35:43.543 21:22:34 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma
00:35:43.543 21:22:34 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:35:43.543 21:22:34 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:35:43.543 21:22:34 -- common/autotest_common.sh@10 -- # set +x
00:35:43.543 ************************************
00:35:43.543 START TEST spdkcli_nvmf_rdma
00:35:43.543 ************************************
00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma
00:35:43.543 * Looking for test storage...
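The START/END banners and the real/user/sys triplets above are produced by autotest's run_test wrapper; its observable behavior is approximately the following (a sketch inferred from the output in this log, not the verbatim helper):

    # run_test <name> <command...> -- banner, time, banner (approximation).
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # yields the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }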
00:35:43.543 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3764501 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3764501 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@827 -- # '[' -z 3764501 ']' 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:43.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:43.543 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:43.802 [2024-07-13 21:22:34.473795] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:43.802 [2024-07-13 21:22:34.473846] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3764501 ] 00:35:43.802 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.803 [2024-07-13 21:22:34.543943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:43.803 [2024-07-13 21:22:34.583704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.803 [2024-07-13 21:22:34.583706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.803 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:43.803 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # return 0 00:35:43.803 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:43.803 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:43.803 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:44.062 21:22:34 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:35:44.063 21:22:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 
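As in the tc3 run earlier, the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is a poll loop (note the local max_retries=100 in the xtrace) that blocks until the freshly launched nvmf_tgt answers RPCs. Roughly, under those assumptions (a sketch, not the verbatim waitforlisten helper; the sleep interval is assumed):

    for ((i = 0; i < 100; i++)); do                                     # mirrors max_retries=100
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5                                                       # interval is an assumption
    done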
00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:35:50.673 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:35:50.673 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
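The scan above matched both ports of a Mellanox adapter (vendor 0x15b3, device 0x1015) at 0000:d9:00.0 and 0000:d9:00.1. The same classification can be reproduced by hand on the node (a sketch; the vendor/device ID tables come from the nvmf/common.sh arrays traced above, and the sysfs path mirrors the one the script globs):

    lspci -nn -d 15b3:1015                          # list the two matched PCI functions
    ls /sys/bus/pci/devices/0000:d9:00.0/net        # netdev behind the first port (mlx_0_0 here)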
00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:35:50.673 Found net devices under 0000:d9:00.0: mlx_0_0 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:35:50.673 Found net devices under 0000:d9:00.1: mlx_0_1 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:35:50.673 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:50.673 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:35:50.673 altname enp217s0f0np0 00:35:50.673 altname ens818f0np0 00:35:50.673 inet 192.168.100.8/24 scope global mlx_0_0 00:35:50.673 valid_lft forever preferred_lft forever 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:35:50.673 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:35:50.674 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:50.674 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:35:50.674 altname enp217s0f1np1 00:35:50.674 altname ens818f1np1 00:35:50.674 inet 192.168.100.9/24 scope global mlx_0_1 00:35:50.674 valid_lft forever preferred_lft forever 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- 
nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:35:50.674 192.168.100.9' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:35:50.674 192.168.100.9' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:35:50.674 192.168.100.9' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:50.674 21:22:41 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:50.674 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:50.674 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:50.674 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:50.674 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:50.674 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:50.674 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:50.674 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:35:50.674 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 
192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:35:50.674 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:50.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:50.674 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:50.674 ' 00:35:53.209 [2024-07-13 21:22:43.882593] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b74f00/0x1cc5f80) succeed. 00:35:53.209 [2024-07-13 21:22:43.892111] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b765e0/0x1b85e00) succeed. 
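The quoted herestring above feeds test/spdkcli/spdkcli_job.py, which runs each quoted command and (judging by the trailing True/False flags) verifies the expected match string. The same configuration can be applied one step at a time with the stock scripts/spdkcli.py, which also accepts a command as arguments, as the "ll /nvmf" invocation later in this log shows (illustrative subset; command strings are verbatim from the job above):

    scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    scripts/spdkcli.py nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4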
00:35:54.588 [2024-07-13 21:22:45.123020] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:35:56.491 [2024-07-13 21:22:47.285816] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:35:58.395 [2024-07-13 21:22:49.143868] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:35:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:59.773 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:59.773 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:59.773 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:59.773 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:59.773 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:35:59.773 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:59.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:59.773 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:00.032 21:22:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:00.032 21:22:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.032 21:22:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:00.032 21:22:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:00.032 21:22:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:00.032 21:22:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:00.032 21:22:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:36:00.032 21:22:50 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:00.290 21:22:51 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:00.290 21:22:51 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:00.290 21:22:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:00.290 21:22:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.290 21:22:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:00.290 21:22:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:00.290 21:22:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:00.290 21:22:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:00.549 21:22:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:00.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:00.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:00.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:00.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:36:00.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:36:00.549 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:00.549 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:00.549 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:00.549 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:00.549 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:00.549 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:00.549 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:00.549 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:00.549 ' 00:36:05.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:05.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:05.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:05.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:05.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:36:05.825 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:36:05.825 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:05.825 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:05.825 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:05.825 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:05.825 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:05.825 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:05.825 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:05.825 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3764501 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@946 -- # '[' -z 3764501 ']' 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # kill -0 3764501 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@951 -- # uname 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3764501 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3764501' 00:36:05.825 killing process with pid 3764501 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@965 -- # kill 3764501 00:36:05.825 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@970 -- # wait 3764501 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:36:06.084 rmmod nvme_rdma 00:36:06.084 rmmod nvme_fabrics 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:06.084 21:22:56 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:36:06.085 00:36:06.085 real 0m22.645s 00:36:06.085 user 0m48.466s 00:36:06.085 sys 0m6.032s 00:36:06.085 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:06.085 21:22:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:06.085 ************************************ 00:36:06.085 END TEST spdkcli_nvmf_rdma 00:36:06.085 ************************************ 00:36:06.085 21:22:56 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:06.085 21:22:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:06.085 21:22:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:06.085 21:22:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:06.085 21:22:56 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:06.085 21:22:56 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:06.085 21:22:56 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:06.085 21:22:56 -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:06.085 21:22:56 -- common/autotest_common.sh@10 -- # set +x 00:36:06.344 21:22:56 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:06.344 21:22:56 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:36:06.344 21:22:56 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:36:06.344 21:22:56 -- common/autotest_common.sh@10 -- # set +x 00:36:12.915 INFO: APP EXITING 00:36:12.916 INFO: killing all VMs 00:36:12.916 INFO: killing vhost app 00:36:12.916 INFO: EXIT DONE 00:36:14.914 Waiting for block devices as requested 00:36:14.914 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:15.174 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:15.174 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:15.174 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:15.434 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:15.434 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:15.434 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:15.434 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:36:15.693 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:15.693 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:15.693 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:15.952 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:15.952 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:15.952 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:15.952 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:16.211 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:16.211 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:36:19.503 Cleaning 00:36:19.503 Removing: /var/run/dpdk/spdk0/config 00:36:19.503 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:19.503 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:19.503 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:19.503 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:19.503 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:19.503 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:19.503 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:19.503 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:19.503 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:19.503 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:19.503 Removing: /var/run/dpdk/spdk1/config 00:36:19.503 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:19.503 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:19.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:19.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:19.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:19.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:19.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:19.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:19.762 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:19.762 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:19.762 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:19.762 Removing: /var/run/dpdk/spdk2/config 00:36:19.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:19.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:19.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:19.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:19.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:19.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:19.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:19.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:19.762 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:19.762 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:19.762 Removing: /var/run/dpdk/spdk3/config 00:36:19.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:19.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:19.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:19.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:19.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:19.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:19.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:19.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:19.763 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:19.763 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:19.763 Removing: /var/run/dpdk/spdk4/config 00:36:19.763 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:19.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:19.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:19.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:19.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:19.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:19.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:19.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:19.763 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:19.763 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:19.763 Removing: /dev/shm/bdevperf_trace.pid3515598 00:36:19.763 Removing: /dev/shm/bdevperf_trace.pid3661801 00:36:19.763 Removing: /dev/shm/bdev_svc_trace.1 00:36:19.763 Removing: /dev/shm/nvmf_trace.0 00:36:19.763 Removing: /dev/shm/spdk_tgt_trace.pid3352532 00:36:19.763 Removing: /var/run/dpdk/spdk0 00:36:19.763 Removing: /var/run/dpdk/spdk1 00:36:19.763 Removing: /var/run/dpdk/spdk2 00:36:19.763 Removing: /var/run/dpdk/spdk3 00:36:19.763 Removing: /var/run/dpdk/spdk4 00:36:19.763 Removing: /var/run/dpdk/spdk_pid3349806 00:36:19.763 Removing: /var/run/dpdk/spdk_pid3351063 00:36:19.763 Removing: /var/run/dpdk/spdk_pid3352532 00:36:19.763 Removing: /var/run/dpdk/spdk_pid3353001 00:36:19.763 Removing: /var/run/dpdk/spdk_pid3354074 00:36:19.763 Removing: /var/run/dpdk/spdk_pid3354350 00:36:19.763 Removing: /var/run/dpdk/spdk_pid3355218 00:36:19.763 Removing: /var/run/dpdk/spdk_pid3355391 00:36:19.763 Removing: /var/run/dpdk/spdk_pid3355598 00:36:19.763 Removing: /var/run/dpdk/spdk_pid3360684 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3362066 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3362400 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3362511 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3362845 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3363163 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3363446 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3363736 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3364016 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3364762 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3367790 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3368090 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3368333 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3368387 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3368949 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3368970 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3369552 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3369779 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3370074 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3370230 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3370687 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3370869 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3371585 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3371823 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3372102 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3372257 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3372293 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3372597 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3372865 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3373079 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3373287 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3373492 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3373771 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3374053 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3374335 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3374623 00:36:20.022 Removing: 
/var/run/dpdk/spdk_pid3374903 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3375166 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3375356 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3375554 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3375798 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3376078 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3376359 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3376644 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3376932 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3377219 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3377506 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3377757 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3377858 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3378195 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3382063 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3476730 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3480871 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3491748 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3497063 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3500507 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3501311 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3515598 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3515881 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3519870 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3525478 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3528195 00:36:20.022 Removing: /var/run/dpdk/spdk_pid3538120 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3562734 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3566184 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3613447 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3618599 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3659795 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3660738 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3661801 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3665800 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3672820 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3673738 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3674548 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3675592 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3675870 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3680371 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3680377 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3684898 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3685431 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3685969 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3686759 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3686764 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3689172 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3691033 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3692885 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3694735 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3696530 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3698387 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3705102 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3705523 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3707790 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3708986 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3715748 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3718410 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3724011 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3733612 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3733621 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3753068 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3753332 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3759167 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3759520 00:36:20.287 Removing: /var/run/dpdk/spdk_pid3761445 00:36:20.287 Removing: 
/var/run/dpdk/spdk_pid3764501 00:36:20.287 Clean 00:36:20.287 21:23:11 -- common/autotest_common.sh@1447 -- # return 0 00:36:20.287 21:23:11 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:20.287 21:23:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:20.287 21:23:11 -- common/autotest_common.sh@10 -- # set +x 00:36:20.545 21:23:11 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:20.545 21:23:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:20.545 21:23:11 -- common/autotest_common.sh@10 -- # set +x 00:36:20.545 21:23:11 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:36:20.545 21:23:11 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:36:20.545 21:23:11 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:36:20.545 21:23:11 -- spdk/autotest.sh@391 -- # hash lcov 00:36:20.545 21:23:11 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:20.545 21:23:11 -- spdk/autotest.sh@393 -- # hostname 00:36:20.545 21:23:11 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:36:20.545 geninfo: WARNING: invalid characters removed from testname! 00:36:38.629 21:23:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:36:38.892 21:23:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:36:40.800 21:23:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:36:42.178 21:23:32 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:36:44.081 21:23:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:36:45.460 21:23:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:36:47.366 21:23:37 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:47.366 21:23:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:47.366 21:23:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:47.366 21:23:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.366 21:23:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.366 21:23:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.366 21:23:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.366 21:23:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.366 21:23:37 -- paths/export.sh@5 -- $ export PATH 00:36:47.366 21:23:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.366 21:23:37 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:36:47.366 21:23:37 -- common/autobuild_common.sh@437 -- $ date +%s 00:36:47.366 21:23:37 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720898617.XXXXXX 00:36:47.366 21:23:37 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720898617.WzJkWa 00:36:47.366 21:23:37 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:36:47.366 21:23:37 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:36:47.366 21:23:37 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:36:47.366 21:23:37 -- common/autobuild_common.sh@444 
-- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:36:47.366 21:23:37 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:47.366 21:23:37 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:47.366 21:23:37 -- common/autobuild_common.sh@453 -- $ get_config_params 00:36:47.366 21:23:37 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:36:47.366 21:23:37 -- common/autotest_common.sh@10 -- $ set +x 00:36:47.366 21:23:38 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:36:47.366 21:23:38 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:36:47.366 21:23:38 -- pm/common@17 -- $ local monitor 00:36:47.366 21:23:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:47.366 21:23:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:47.366 21:23:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:47.366 21:23:38 -- pm/common@21 -- $ date +%s 00:36:47.366 21:23:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:47.366 21:23:38 -- pm/common@21 -- $ date +%s 00:36:47.366 21:23:38 -- pm/common@21 -- $ date +%s 00:36:47.366 21:23:38 -- pm/common@25 -- $ sleep 1 00:36:47.366 21:23:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720898618 00:36:47.366 21:23:38 -- pm/common@21 -- $ date +%s 00:36:47.366 21:23:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720898618 00:36:47.366 21:23:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720898618 00:36:47.366 21:23:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720898618 00:36:47.366 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720898618_collect-cpu-temp.pm.log 00:36:47.366 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720898618_collect-vmstat.pm.log 00:36:47.366 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720898618_collect-cpu-load.pm.log 00:36:47.366 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720898618_collect-bmc-pm.bmc.pm.log 00:36:48.374 21:23:39 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:36:48.374 21:23:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:36:48.374 21:23:39 -- spdk/autopackage.sh@11 -- $ cd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:36:48.374 21:23:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:48.374 21:23:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:48.374 21:23:39 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:48.374 21:23:39 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:48.374 21:23:39 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:48.374 21:23:39 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:48.374 21:23:39 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:36:48.374 21:23:39 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:48.374 21:23:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:48.374 21:23:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:48.374 21:23:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:48.374 21:23:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:48.374 21:23:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:48.374 21:23:39 -- pm/common@44 -- $ pid=3784084 00:36:48.374 21:23:39 -- pm/common@50 -- $ kill -TERM 3784084 00:36:48.374 21:23:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:48.374 21:23:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:48.374 21:23:39 -- pm/common@44 -- $ pid=3784086 00:36:48.374 21:23:39 -- pm/common@50 -- $ kill -TERM 3784086 00:36:48.374 21:23:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:48.374 21:23:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:48.375 21:23:39 -- pm/common@44 -- $ pid=3784088 00:36:48.375 21:23:39 -- pm/common@50 -- $ kill -TERM 3784088 00:36:48.375 21:23:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:48.375 21:23:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:48.375 21:23:39 -- pm/common@44 -- $ pid=3784113 00:36:48.375 21:23:39 -- pm/common@50 -- $ sudo -E kill -TERM 3784113 00:36:48.375 + [[ -n 3227085 ]] 00:36:48.375 + sudo kill 3227085 00:36:48.384 [Pipeline] } 00:36:48.400 [Pipeline] // stage 00:36:48.405 [Pipeline] } 00:36:48.419 [Pipeline] // timeout 00:36:48.424 [Pipeline] } 00:36:48.438 [Pipeline] // catchError 00:36:48.444 [Pipeline] } 00:36:48.459 [Pipeline] // wrap 00:36:48.465 [Pipeline] } 00:36:48.480 [Pipeline] // catchError 00:36:48.489 [Pipeline] stage 00:36:48.491 [Pipeline] { (Epilogue) 00:36:48.504 [Pipeline] catchError 00:36:48.506 [Pipeline] { 00:36:48.517 [Pipeline] echo 00:36:48.518 Cleanup processes 00:36:48.523 [Pipeline] sh 00:36:48.805 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:36:48.805 3784192 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:36:48.805 3784535 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:36:48.820 [Pipeline] sh 00:36:49.104 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:36:49.104 ++ awk '{print $1}' 00:36:49.104 ++ grep -v 'sudo pgrep' 00:36:49.104 + sudo kill -9 3784192 00:36:49.117 [Pipeline] sh 00:36:49.401 + 
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:49.401 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:36:54.671 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:36:57.971 [Pipeline] sh 00:36:58.256 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:58.256 Artifacts sizes are good 00:36:58.270 [Pipeline] archiveArtifacts 00:36:58.278 Archiving artifacts 00:36:58.474 [Pipeline] sh 00:36:58.758 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:36:58.773 [Pipeline] cleanWs 00:36:58.783 [WS-CLEANUP] Deleting project workspace... 00:36:58.783 [WS-CLEANUP] Deferred wipeout is used... 00:36:58.789 [WS-CLEANUP] done 00:36:58.791 [Pipeline] } 00:36:58.812 [Pipeline] // catchError 00:36:58.824 [Pipeline] sh 00:36:59.107 + logger -p user.info -t JENKINS-CI 00:36:59.117 [Pipeline] } 00:36:59.134 [Pipeline] // stage 00:36:59.139 [Pipeline] } 00:36:59.156 [Pipeline] // node 00:36:59.163 [Pipeline] End of Pipeline 00:36:59.199 Finished: SUCCESS